Jan 26 23:08:14 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 23:08:14 crc restorecon[4686]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 23:08:14 crc restorecon[4686]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 23:08:14 crc restorecon[4686]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc 
restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc 
restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:14 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15
crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 23:08:15 crc restorecon[4686]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 
crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc 
restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 23:08:15 crc restorecon[4686]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 23:08:16 crc kubenswrapper[4995]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.276917 4995 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283074 4995 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283120 4995 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283138 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283147 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283155 4995 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283162 4995 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283226 4995 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283243 4995 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283252 4995 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283260 4995 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283268 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283287 4995 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283295 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283302 4995 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283308 4995 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283315 4995 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283321 4995 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283327 4995 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283333 4995 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283339 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283346 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283352 4995 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283358 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283364 4995 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283371 4995 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283377 4995 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283383 4995 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283389 4995 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283395 4995 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283401 4995 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283407 4995 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283413 4995 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283419 4995 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283425 4995 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283433 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283440 4995 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283449 4995 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283460 4995 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283467 4995 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283474 4995 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283481 4995 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283488 4995 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283494 4995 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283501 4995 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283507 4995 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283513 4995 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283520 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283526 4995 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283532 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283538 4995 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283544 4995 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283550 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283557 4995 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283565 4995 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283572 4995 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283580 4995 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283592 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283600 4995 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283607 4995 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283644 4995 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283651 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283658 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283665 4995 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283671 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283682 4995 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283690 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283697 4995 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283704 4995 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283711 4995 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283718 4995 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.283726 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283869 4995 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283886 4995 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283900 4995 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283911 4995 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283921 4995 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283929 4995 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283940 4995 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283950 4995 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283958 4995 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283966 4995 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283975 4995 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283983 4995 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283992 4995 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.283999 4995 flags.go:64] FLAG: --cgroup-root=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284007 4995 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284015 4995 flags.go:64] FLAG: --client-ca-file=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284022 4995 flags.go:64] FLAG: --cloud-config=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284030 4995 flags.go:64] FLAG: --cloud-provider=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284038 4995 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284050 4995 flags.go:64] FLAG: --cluster-domain=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284057 4995 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284065 4995 flags.go:64] FLAG: --config-dir=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284073 4995 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284081 4995 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284092 4995 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284120 4995 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284129 4995 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284138 4995 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284145 4995 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284153 4995 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284160 4995 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284168 4995 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284183 4995 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284194 4995 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284202 4995 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284211 4995 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284219 4995 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284226 4995 flags.go:64] FLAG: --enable-server="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284234 4995 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284243 4995 flags.go:64] FLAG: --event-burst="100"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284251 4995 flags.go:64] FLAG: --event-qps="50"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284259 4995 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284267 4995 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284274 4995 flags.go:64] FLAG: --eviction-hard=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284284 4995 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284291 4995 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284298 4995 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284306 4995 flags.go:64] FLAG: --eviction-soft=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284313 4995 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284323 4995 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284331 4995 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284338 4995 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284346 4995 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284354 4995 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284362 4995 flags.go:64] FLAG: --feature-gates=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284371 4995 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284379 4995 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284386 4995 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284394 4995 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284402 4995 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284410 4995 flags.go:64] FLAG: --help="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284417 4995 flags.go:64] FLAG: --hostname-override=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284424 4995 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284432 4995 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284439 4995 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284446 4995 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284453 4995 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284461 4995 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284468 4995 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284476 4995 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284483 4995 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284492 4995 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284500 4995 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284508 4995 flags.go:64] FLAG: --kube-reserved=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284515 4995 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284523 4995 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284531 4995 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284538 4995 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284545 4995 flags.go:64] FLAG: --lock-file=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284553 4995 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284562 4995 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284570 4995 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284585 4995 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284594 4995 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284603 4995 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284612 4995 flags.go:64] FLAG: --logging-format="text"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284619 4995 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284628 4995 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284635 4995 flags.go:64] FLAG: --manifest-url=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284642 4995 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284653 4995 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284661 4995 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284671 4995 flags.go:64] FLAG: --max-pods="110"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284679 4995 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284687 4995 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284695 4995 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284704 4995 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284711 4995 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284719 4995 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284727 4995 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284746 4995 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284754 4995 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284762 4995 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284769 4995 flags.go:64] FLAG: --pod-cidr=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284777 4995 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284789 4995 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284797 4995 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284806 4995 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284815 4995 flags.go:64] FLAG: --port="10250"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284823 4995 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284830 4995 flags.go:64] FLAG: --provider-id=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284838 4995 flags.go:64] FLAG: --qos-reserved=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284846 4995 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284854 4995 flags.go:64] FLAG: --register-node="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284863 4995 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284871 4995 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284885 4995 flags.go:64] FLAG: --registry-burst="10"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284893 4995 flags.go:64] FLAG: --registry-qps="5"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284900 4995 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284908 4995 flags.go:64] FLAG: --reserved-memory=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284918 4995 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284927 4995 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284935 4995 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284942 4995 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284950 4995 flags.go:64] FLAG: --runonce="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284958 4995 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284967 4995 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284975 4995 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284982 4995 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284990 4995 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.284998 4995 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285007 4995 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285015 4995 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285022 4995 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285030 4995 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285038 4995 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285046 4995 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285054 4995 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285062 4995 flags.go:64] FLAG: --system-cgroups=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285069 4995 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285082 4995 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285089 4995 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285097 4995 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285130 4995 flags.go:64] FLAG: --tls-min-version=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285138 4995 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285146 4995 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285154 4995 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285162 4995 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285170 4995 flags.go:64] FLAG: --v="2"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285181 4995 flags.go:64] FLAG: --version="false"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285190 4995 flags.go:64] FLAG: --vmodule=""
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285199 4995 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.285208 4995 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285395 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285406 4995 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285413 4995 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285421 4995 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285428 4995 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285436 4995 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285443 4995 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285449 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285456 4995 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285463 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285470 4995 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285476 4995 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285483 4995 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285489 4995 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285496 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285504 4995 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285510 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285517 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285524 4995 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285530 4995 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285537 4995 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285544 4995 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285550 4995 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285557 4995 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285564 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285572 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285580 4995 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285587 4995 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285594 4995 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285604 4995 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285612 4995 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285619 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285626 4995 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285634 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285641 4995 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285647 4995 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285654 4995 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285661 4995 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285667 4995 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285674 4995 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285681 4995 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285688 4995 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285695 4995 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285701 4995 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285708 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285714 4995 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285722 4995 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285728 4995 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285738 4995 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285747 4995 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285756 4995 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285762 4995 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285770 4995 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285777 4995 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285784 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285791 4995 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285797 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285805 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285812 4995 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285819 4995 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285826 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285832 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285841 4995 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285848 4995 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285854 4995 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285862 4995 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285869 4995 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285878 4995 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285887 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285897 4995 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
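The warnings above repeat the same gate names across multiple passes. A quick way to collapse such output into a unique, sorted list of gate names (a sketch assuming the `unrecognized feature gate: <Name>` message format seen in this log; in practice the input would come from `journalctl -u kubelet`, for which a small inline sample stands in here):

```shell
#!/bin/sh
# Collapse repeated "unrecognized feature gate" warnings into a
# unique, sorted list of gate names.
sample='W0126 23:08:16.283138 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
W0126 23:08:16.285797 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
W0126 23:08:16.283147 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters'

printf '%s\n' "$sample" \
  | sed -n 's/.*unrecognized feature gate: \([A-Za-z0-9]*\).*/\1/p' \
  | sort -u
```

This prints each distinct gate name once, which makes it much easier to compare the set against another node or release.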
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.285907 4995 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.286712 4995 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.300020 4995 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.300068 4995 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300214 4995 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300235 4995 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300245 4995 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300254 4995 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300265 4995 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300275 4995 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300284 4995 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300292 4995 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300300 4995 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300308 4995 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300316 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300324 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300332 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300339 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300349 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300356 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300364 4995 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300372 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300379 4995 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 
23:08:16.300389 4995 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300403 4995 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300412 4995 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300421 4995 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300429 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300437 4995 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300445 4995 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300453 4995 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300460 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300469 4995 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300477 4995 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300485 4995 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300492 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300500 4995 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 23:08:16 crc kubenswrapper[4995]: 
W0126 23:08:16.300508 4995 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300515 4995 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300523 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300531 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300538 4995 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300547 4995 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300555 4995 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300562 4995 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300570 4995 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300577 4995 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300585 4995 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300593 4995 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300600 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300608 4995 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300616 4995 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300623 4995 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300632 4995 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300640 4995 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300648 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300656 4995 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300664 4995 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300672 4995 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300680 4995 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300687 4995 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300695 4995 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300704 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300712 4995 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300719 4995 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300727 4995 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 23:08:16 crc 
kubenswrapper[4995]: W0126 23:08:16.300734 4995 feature_gate.go:330] unrecognized feature gate: Example Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300742 4995 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300750 4995 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300758 4995 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300768 4995 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300779 4995 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300789 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300798 4995 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.300807 4995 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.300820 4995 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301040 4995 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 23:08:16 crc 
kubenswrapper[4995]: W0126 23:08:16.301052 4995 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301065 4995 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301075 4995 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301085 4995 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301094 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301123 4995 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301132 4995 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301141 4995 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301150 4995 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301158 4995 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301168 4995 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301178 4995 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301186 4995 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301195 4995 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 
23:08:16.301203 4995 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301211 4995 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301218 4995 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301226 4995 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301234 4995 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301242 4995 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301252 4995 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301261 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301269 4995 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301276 4995 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301284 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301292 4995 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301300 4995 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301308 4995 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301315 4995 feature_gate.go:330] unrecognized 
feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301323 4995 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301331 4995 feature_gate.go:330] unrecognized feature gate: Example Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301338 4995 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301346 4995 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301354 4995 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301362 4995 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301370 4995 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301379 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301387 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301395 4995 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301402 4995 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301410 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301418 4995 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301426 4995 feature_gate.go:330] unrecognized feature gate: 
ImageStreamImportMode Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301435 4995 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301442 4995 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301453 4995 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301463 4995 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301472 4995 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301482 4995 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301492 4995 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301502 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301510 4995 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301519 4995 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301527 4995 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301536 4995 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301545 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301554 4995 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301562 4995 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301571 4995 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301579 4995 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301587 4995 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301595 4995 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301603 4995 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301611 4995 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301619 4995 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301627 4995 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301635 4995 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301642 4995 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301650 4995 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.301658 4995 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.301670 4995 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.301910 4995 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.310370 4995 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.310498 4995 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
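The `feature_gate.go:386] feature gates: {map[...]}` lines above give the effective gate state in one place. A minimal sketch of pulling that summary into a dict for inspection — `parse_feature_gates` is an illustrative helper written for this log format, not part of the kubelet:

```python
import re

def parse_feature_gates(line: str) -> dict:
    """Parse a kubelet 'feature gates: {map[Name:bool ...]}' line into a dict.

    Returns {} when the line does not contain a feature-gate summary.
    """
    m = re.search(r"feature gates: \{map\[(.*?)\]\}", line)
    if not m:
        return {}
    # Entries are space-separated "Name:true" / "Name:false" pairs.
    return {k: v == "true"
            for k, v in (pair.split(":") for pair in m.group(1).split())}

# Example line, shortened from the journal output above.
line = ("I0126 23:08:16.286712 4995 feature_gate.go:386] feature gates: "
        "{map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}")
gates = parse_feature_gates(line)
print(gates["NodeSwap"])  # False
```

The same parser applies to each of the three `feature gates:` summary lines in this boot; here they all report identical state.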
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.312285 4995 server.go:997] "Starting client certificate rotation" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.312378 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.312664 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-08 14:56:24.808743184 +0000 UTC Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.312767 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.339155 4995 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.342050 4995 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.344848 4995 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.359687 4995 log.go:25] "Validated CRI v1 runtime API" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.399550 4995 log.go:25] "Validated CRI v1 image API" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.401378 4995 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.407953 4995 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-23-04-35-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.408002 4995 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:44 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.429782 4995 manager.go:217] Machine: {Timestamp:2026-01-26 23:08:16.426745378 +0000 UTC m=+0.591452863 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:95aab811-f2d5-4faf-a048-4477d37cf623 BootID:d1cbdfe9-1842-4004-b68d-332d972c0049 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:44 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 
Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6d:a8:dd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6d:a8:dd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ae:40:08 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:95:68:1f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d7:c9:65 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:cb:39:ea Speed:-1 Mtu:1496} {Name:eth10 MacAddress:86:30:34:38:09:57 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ee:1b:7b:4a:1f:c2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.430011 4995 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.430204 4995 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.431723 4995 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.431926 4995 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.431960 4995 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"10
0Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.432182 4995 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.432195 4995 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.432769 4995 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.432799 4995 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.432995 4995 state_mem.go:36] "Initialized new in-memory state store" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.433081 4995 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.437788 4995 kubelet.go:418] "Attempting to sync node with API server" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.437811 4995 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.437840 4995 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.437853 4995 kubelet.go:324] "Adding apiserver pod source" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.437864 4995 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.441765 4995 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.442530 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.442635 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.442533 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.442693 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.443022 4995 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.445977 4995 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447472 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447503 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447517 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447528 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447547 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447559 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447571 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447630 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447654 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447672 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447696 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.447727 4995 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.448872 4995 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.449681 4995 server.go:1280] "Started kubelet" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.450595 4995 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.450646 4995 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 23:08:16 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.453462 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.453503 4995 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.453999 4995 server.go:460] "Adding debug handlers to kubelet server" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.454210 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 21:58:30.169092632 +0000 UTC Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.450744 4995 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.455215 4995 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.455220 4995 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.455244 4995 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.455656 4995 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.455854 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.455230 4995 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.455906 4995 factory.go:55] Registering systemd factory Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.456020 4995 factory.go:221] Registration of the systemd container factory successfully Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.455928 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.456611 4995 factory.go:153] Registering CRI-O factory Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.456637 4995 factory.go:221] Registration of the crio container factory successfully Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.457035 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.457450 4995 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.457504 4995 factory.go:103] Registering Raw factory Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.457526 4995 manager.go:1196] Started watching for new ooms in manager Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.459715 4995 manager.go:319] Starting recovery of all containers Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.461914 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6a9615fbb324 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:08:16.449639204 +0000 UTC m=+0.614346709,LastTimestamp:2026-01-26 23:08:16.449639204 +0000 UTC m=+0.614346709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.474851 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.474982 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 
23:08:16.474998 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475010 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475023 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475037 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475049 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475061 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475075 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475123 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475134 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475145 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475157 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475170 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475182 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475194 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475231 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475263 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475274 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475285 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475296 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475306 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475319 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475336 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475387 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475401 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475445 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" 
seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475464 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475481 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475497 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475511 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475548 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475612 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 23:08:16 crc 
kubenswrapper[4995]: I0126 23:08:16.475627 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475646 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475657 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475669 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475708 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475721 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475733 4995 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475766 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475795 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475809 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475949 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475962 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475973 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475985 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.475997 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476036 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476047 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476059 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476070 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476163 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476181 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476193 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476206 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476310 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476322 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476335 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476346 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476358 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476369 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476380 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476391 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" 
seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476425 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476451 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476466 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476481 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476494 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476506 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476518 
4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476529 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476558 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476643 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476656 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476668 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476680 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476692 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476703 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476716 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476772 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476784 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476795 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476806 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476817 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.476830 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477019 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477031 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477058 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477070 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477081 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477093 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477120 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477132 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477144 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" 
seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477155 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477184 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477196 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477207 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477218 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477229 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 
23:08:16.477240 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477252 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477283 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477364 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477390 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477403 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477420 4995 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477436 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477453 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477467 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477479 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477493 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477505 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477517 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477530 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477541 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477554 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477565 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477622 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477633 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477643 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477654 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477664 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477675 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477714 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477726 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477738 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477750 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477761 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477775 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477785 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477796 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477807 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477819 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477831 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477844 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477856 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477867 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477878 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477889 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477900 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477911 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477922 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477934 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477944 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477956 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477968 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477979 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.477991 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" 
seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478002 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478015 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478025 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478037 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478049 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478061 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478071 
4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478084 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478097 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478127 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478140 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478150 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478161 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478174 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478186 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478197 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478209 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478222 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478233 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478250 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478261 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478274 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478285 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478302 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478313 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478324 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478336 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478349 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478360 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478371 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478382 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478394 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478414 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478430 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478445 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478457 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478469 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478481 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478494 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478506 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478517 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478529 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478541 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 
23:08:16.478553 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478564 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478576 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478587 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478598 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.478610 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.480937 4995 reconstruct.go:144] "Volume is marked device as 
uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.480967 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.480981 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.480992 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481004 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481015 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481027 4995 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481038 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481049 4995 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481094 4995 reconstruct.go:97] "Volume reconstruction finished" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.481119 4995 reconciler.go:26] "Reconciler: start to sync state" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.493372 4995 manager.go:324] Recovery completed Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.503217 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.504556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.504585 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.504592 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.505399 4995 cpu_manager.go:225] 
"Starting CPU manager" policy="none" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.505421 4995 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.505444 4995 state_mem.go:36] "Initialized new in-memory state store" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.513754 4995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.515900 4995 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.515949 4995 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.515982 4995 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.516047 4995 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.516582 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.516653 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.524212 4995 policy_none.go:49] "None policy: Start" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.524809 4995 memory_manager.go:170] "Starting 
memorymanager" policy="None" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.524840 4995 state_mem.go:35] "Initializing new in-memory state store" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.555857 4995 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.571393 4995 manager.go:334] "Starting Device Plugin manager" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.571462 4995 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.571478 4995 server.go:79] "Starting device plugin registration server" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.571977 4995 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.571992 4995 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.572429 4995 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.574216 4995 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.574245 4995 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.579365 4995 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.616544 4995 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.616635 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.617798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.617821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.617829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.617953 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.618888 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.618984 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.622283 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.622370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.622398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.622878 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.623180 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.623267 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.623453 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.623490 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.623504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.624591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.624619 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.624627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.624726 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.625176 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.625217 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626416 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626442 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626466 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626507 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626523 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.626688 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: 
I0126 23:08:16.626954 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.627011 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628128 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628141 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628456 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.628479 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.629686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.629715 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.629726 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.657979 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.673404 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.674973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.675007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.675017 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.675045 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.675812 4995 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682560 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682623 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682648 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682664 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682680 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682785 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682865 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682917 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.682964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683074 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683126 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683153 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683181 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.683210 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784669 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784753 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784787 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784821 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784850 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784878 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784907 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784938 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.784977 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785007 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785025 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785052 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785009 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785091 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785177 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785202 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785143 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785288 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785336 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785361 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785382 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785422 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785218 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785841 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.785994 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: 
I0126 23:08:16.786019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.786049 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.786016 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.875960 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.877268 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.877310 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.877321 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.877352 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:16 crc kubenswrapper[4995]: E0126 23:08:16.878256 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": 
dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.952091 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.957673 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.977391 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: I0126 23:08:16.998447 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:16 crc kubenswrapper[4995]: W0126 23:08:16.999026 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-731a5ea88c23bf9e2092159a2e7ce15e153da09effcc7484002b4575cd835b58 WatchSource:0}: Error finding container 731a5ea88c23bf9e2092159a2e7ce15e153da09effcc7484002b4575cd835b58: Status 404 returned error can't find the container with id 731a5ea88c23bf9e2092159a2e7ce15e153da09effcc7484002b4575cd835b58 Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.004302 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.004972 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-239cb6f80b25662075cf2e115f80b0722b8f06837790196f0debcec8cb897ce9 WatchSource:0}: Error finding container 239cb6f80b25662075cf2e115f80b0722b8f06837790196f0debcec8cb897ce9: Status 404 returned error can't find the container with id 239cb6f80b25662075cf2e115f80b0722b8f06837790196f0debcec8cb897ce9 Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.010055 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-2dc59dc8f85b0cef1a5077d2480b718c6588891028b64853f4b6f3342f9d1ee3 WatchSource:0}: Error finding container 2dc59dc8f85b0cef1a5077d2480b718c6588891028b64853f4b6f3342f9d1ee3: Status 404 returned error can't find the container with id 2dc59dc8f85b0cef1a5077d2480b718c6588891028b64853f4b6f3342f9d1ee3 Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.023191 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-a2bc8b1131e55ed1d14ddc1300365c78653a439cf1c844955884736e033d3dd6 WatchSource:0}: Error finding container a2bc8b1131e55ed1d14ddc1300365c78653a439cf1c844955884736e033d3dd6: Status 404 returned error can't find the container with id a2bc8b1131e55ed1d14ddc1300365c78653a439cf1c844955884736e033d3dd6 Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.024457 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6c3fd7f22683bd1ef4210b7d79bee12bfce44f99ae883a63e1b65ca9ff2c1d23 
WatchSource:0}: Error finding container 6c3fd7f22683bd1ef4210b7d79bee12bfce44f99ae883a63e1b65ca9ff2c1d23: Status 404 returned error can't find the container with id 6c3fd7f22683bd1ef4210b7d79bee12bfce44f99ae883a63e1b65ca9ff2c1d23 Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.058828 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.271256 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.271334 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.278710 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.279684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.279714 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.279722 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:17 crc 
kubenswrapper[4995]: I0126 23:08:17.279742 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.279957 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.452262 4995 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.455439 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:51:03.373677683 +0000 UTC Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.476052 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.476168 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.529005 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c3fd7f22683bd1ef4210b7d79bee12bfce44f99ae883a63e1b65ca9ff2c1d23"} Jan 26 23:08:17 crc 
kubenswrapper[4995]: I0126 23:08:17.529938 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a2bc8b1131e55ed1d14ddc1300365c78653a439cf1c844955884736e033d3dd6"} Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.531044 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2dc59dc8f85b0cef1a5077d2480b718c6588891028b64853f4b6f3342f9d1ee3"} Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.532002 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"239cb6f80b25662075cf2e115f80b0722b8f06837790196f0debcec8cb897ce9"} Jan 26 23:08:17 crc kubenswrapper[4995]: I0126 23:08:17.533246 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"731a5ea88c23bf9e2092159a2e7ce15e153da09effcc7484002b4575cd835b58"} Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.859966 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Jan 26 23:08:17 crc kubenswrapper[4995]: W0126 23:08:17.882981 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:17 crc kubenswrapper[4995]: E0126 23:08:17.883069 4995 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.080525 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.083474 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.083528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.083538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.083566 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:18 crc kubenswrapper[4995]: E0126 23:08:18.084208 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Jan 26 23:08:18 crc kubenswrapper[4995]: W0126 23:08:18.113939 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:18 crc kubenswrapper[4995]: E0126 23:08:18.114062 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.382498 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 23:08:18 crc kubenswrapper[4995]: E0126 23:08:18.383434 4995 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:18 crc kubenswrapper[4995]: E0126 23:08:18.384661 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e6a9615fbb324 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:08:16.449639204 +0000 UTC m=+0.614346709,LastTimestamp:2026-01-26 23:08:16.449639204 +0000 UTC m=+0.614346709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.452553 4995 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:18 crc 
kubenswrapper[4995]: I0126 23:08:18.455735 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:20:06.03457075 +0000 UTC Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.538798 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.538841 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.538851 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.538861 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.538882 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.539845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.539880 4995 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.539889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.541600 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" exitCode=0 Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.541716 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.541990 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.542407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.542420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.542428 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.543405 4995 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762" exitCode=0 Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.543438 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.543511 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.543940 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544515 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544849 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.544912 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.545612 4995 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46" exitCode=0 Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.545683 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.545926 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.546357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.546390 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.546400 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.548426 4995 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142" exitCode=0 Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.548453 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142"} Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.548548 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.549290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.549318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:18 crc kubenswrapper[4995]: I0126 23:08:18.549327 4995 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: W0126 23:08:19.133359 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:19 crc kubenswrapper[4995]: E0126 23:08:19.133448 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:19 crc kubenswrapper[4995]: W0126 23:08:19.168682 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:19 crc kubenswrapper[4995]: E0126 23:08:19.168763 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.451565 4995 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.456712 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2026-01-04 20:00:05.361140472 +0000 UTC Jan 26 23:08:19 crc kubenswrapper[4995]: E0126 23:08:19.461327 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.553615 4995 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6" exitCode=0 Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.553675 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.553735 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.554636 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.554661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.554670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.555310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1a55e11716925cce81c41c9f11fb000386beeb8b70e04254b605df03a4203004"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.555332 4995 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.555953 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.555979 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.555989 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.558480 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.558512 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.558547 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.558613 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.559554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.559579 4995 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.559589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564574 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564609 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564624 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564633 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564647 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564636 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.564747 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3"} Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565476 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565549 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565550 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.565572 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.596305 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.596739 4995 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.596787 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.694218 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.695774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.695812 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.695825 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:19 crc kubenswrapper[4995]: I0126 23:08:19.695852 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:19 crc kubenswrapper[4995]: E0126 23:08:19.696362 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.164:6443: connect: connection refused" node="crc" Jan 26 23:08:19 crc kubenswrapper[4995]: W0126 23:08:19.918235 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.164:6443: connect: connection refused Jan 26 23:08:19 crc kubenswrapper[4995]: E0126 23:08:19.918323 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.164:6443: connect: connection refused" logger="UnhandledError" Jan 26 23:08:20 crc 
kubenswrapper[4995]: I0126 23:08:20.111569 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.457002 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 10:13:51.945236036 +0000 UTC Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569210 4995 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e" exitCode=0 Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569339 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569399 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569422 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569438 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569469 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.569777 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e"} Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.570012 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.570054 4995 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571015 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571111 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571156 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571125 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571179 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:20 crc kubenswrapper[4995]: I0126 23:08:20.571235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.381169 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.458009 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 01:02:53.686212107 +0000 UTC Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.529744 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.576226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca"} Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.576285 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb"} Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.576301 4995 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.576307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7"} Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.576328 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb"} Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.577256 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.577294 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:21 crc kubenswrapper[4995]: I0126 23:08:21.577304 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.116427 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.116650 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.118325 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.118391 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.118406 4995 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.459167 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:58:30.503833365 +0000 UTC Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.549614 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.582307 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.582885 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.583195 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b"} Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.583652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.583681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.583691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.584231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.584255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 
23:08:22.584265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.896814 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.898231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.898262 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.898270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:22 crc kubenswrapper[4995]: I0126 23:08:22.898289 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.109916 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.110236 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.111820 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.111868 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.111882 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.111677 4995 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.112287 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.459366 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:58:49.468457184 +0000 UTC Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.584074 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.585365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.585403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:23 crc kubenswrapper[4995]: I0126 23:08:23.585415 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:24 crc kubenswrapper[4995]: I0126 23:08:24.460332 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 17:05:01.962029393 +0000 UTC Jan 26 23:08:25 crc kubenswrapper[4995]: I0126 23:08:25.460675 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2025-12-18 17:05:10.239080045 +0000 UTC Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.054972 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.055258 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.056834 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.056903 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.056918 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.461192 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:54:57.464401925 +0000 UTC Jan 26 23:08:26 crc kubenswrapper[4995]: E0126 23:08:26.579541 4995 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.640897 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.641305 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.642721 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.642788 4995 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.642808 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:26 crc kubenswrapper[4995]: I0126 23:08:26.648790 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.462301 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:51:54.619119735 +0000 UTC Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.595331 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.597800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.597861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.597879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:27 crc kubenswrapper[4995]: I0126 23:08:27.601281 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.081794 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.357872 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 
23:08:28.358088 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.359319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.359347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.359356 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.463224 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:09:48.542045833 +0000 UTC Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.597676 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.599070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.599170 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:28 crc kubenswrapper[4995]: I0126 23:08:28.599195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:29 crc kubenswrapper[4995]: I0126 23:08:29.463609 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:28:53.820037388 +0000 UTC Jan 26 23:08:29 crc kubenswrapper[4995]: I0126 23:08:29.600035 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:29 crc 
kubenswrapper[4995]: I0126 23:08:29.600955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:29 crc kubenswrapper[4995]: I0126 23:08:29.600996 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:29 crc kubenswrapper[4995]: I0126 23:08:29.601006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.453249 4995 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.464612 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:49:36.233717875 +0000 UTC Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.603930 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.605853 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd" exitCode=255 Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.605911 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd"} Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.606167 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 
26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.607368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.607407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.607424 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.608153 4995 scope.go:117] "RemoveContainer" containerID="f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd" Jan 26 23:08:30 crc kubenswrapper[4995]: W0126 23:08:30.613572 4995 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.613648 4995 trace.go:236] Trace[1921714764]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:20.611) (total time: 10001ms): Jan 26 23:08:30 crc kubenswrapper[4995]: Trace[1921714764]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (23:08:30.613) Jan 26 23:08:30 crc kubenswrapper[4995]: Trace[1921714764]: [10.00196472s] [10.00196472s] END Jan 26 23:08:30 crc kubenswrapper[4995]: E0126 23:08:30.613670 4995 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.981225 4995 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.981305 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.989258 4995 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 23:08:30 crc kubenswrapper[4995]: I0126 23:08:30.989335 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.465272 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:46:48.94506499 +0000 UTC Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.611354 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 23:08:31 crc 
kubenswrapper[4995]: I0126 23:08:31.614265 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7"} Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.614485 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.615729 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.615778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:31 crc kubenswrapper[4995]: I0126 23:08:31.615800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:32 crc kubenswrapper[4995]: I0126 23:08:32.465728 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 08:32:39.813456727 +0000 UTC Jan 26 23:08:33 crc kubenswrapper[4995]: I0126 23:08:33.112171 4995 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 23:08:33 crc kubenswrapper[4995]: I0126 23:08:33.112327 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 23:08:33 crc kubenswrapper[4995]: I0126 23:08:33.466294 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:22:51.861739289 +0000 UTC Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.466787 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 11:44:17.967616202 +0000 UTC Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.605610 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.605756 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.605842 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.606621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.606675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.606699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.612371 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.620608 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 
23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.621468 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.621539 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:34 crc kubenswrapper[4995]: I0126 23:08:34.621564 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.467412 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:57:11.779609635 +0000 UTC Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.622811 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.623967 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.624001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.624013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:35 crc kubenswrapper[4995]: E0126 23:08:35.976703 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.979776 4995 trace.go:236] Trace[1754602208]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:23.069) (total time: 12909ms): Jan 26 23:08:35 crc kubenswrapper[4995]: 
Trace[1754602208]: ---"Objects listed" error: 12909ms (23:08:35.979) Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1754602208]: [12.909923606s] [12.909923606s] END Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.979817 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.980539 4995 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 23:08:35 crc kubenswrapper[4995]: E0126 23:08:35.980797 4995 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.980909 4995 trace.go:236] Trace[170435702]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:25.104) (total time: 10876ms): Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[170435702]: ---"Objects listed" error: 10876ms (23:08:35.980) Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[170435702]: [10.876401557s] [10.876401557s] END Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.981386 4995 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.982736 4995 trace.go:236] Trace[1552439728]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 23:08:23.563) (total time: 12419ms): Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1552439728]: ---"Objects listed" error: 12419ms (23:08:35.982) Jan 26 23:08:35 crc kubenswrapper[4995]: Trace[1552439728]: [12.419317108s] [12.419317108s] END Jan 26 23:08:35 crc kubenswrapper[4995]: I0126 23:08:35.982778 4995 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.007565 4995 reflector.go:368] Caches 
populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.026819 4995 csr.go:261] certificate signing request csr-xt48l is approved, waiting to be issued Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.047026 4995 csr.go:257] certificate signing request csr-xt48l is issued Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.312937 4995 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313223 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313258 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.313321 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.164:59338->38.102.83.164:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188e6a96374ef19f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:08:17.008742815 +0000 UTC m=+1.173450280,LastTimestamp:2026-01-26 23:08:17.008742815 +0000 UTC m=+1.173450280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.313438 4995 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.451495 4995 apiserver.go:52] "Watching apiserver" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.460752 4995 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.460912 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461277 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461548 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461603 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461662 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461684 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461719 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.461742 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.461778 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.462088 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.463785 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464059 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464485 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464507 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.464919 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.465294 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.466042 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.466731 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" 
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.467894 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:28:50.215358495 +0000 UTC Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.468054 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.556354 4995 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583501 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583544 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583567 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583684 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583740 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583768 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583821 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583852 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583882 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583911 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583925 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583939 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.583990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584025 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584059 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584088 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584133 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584178 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584200 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584233 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584289 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584317 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584339 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584365 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584418 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584441 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584490 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584511 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584565 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584620 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584645 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584675 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584723 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584750 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584775 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584832 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584860 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584889 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584912 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 
26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584949 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.584997 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585022 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585050 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585116 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585144 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585174 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585197 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585228 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585309 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585334 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.585368 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585470 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585503 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585529 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585557 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585579 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585604 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585651 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585724 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585760 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585786 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585863 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585889 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585910 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585933 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585957 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.585984 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586008 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586063 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: 
\"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586120 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586149 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586524 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586776 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586876 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586932 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586998 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587030 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587057 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587299 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587325 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587349 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587369 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587485 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587505 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587527 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587551 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587638 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587656 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587698 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587751 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587792 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587831 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587829 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587850 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587894 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587914 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587971 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587991 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588013 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " 
Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588053 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588125 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588142 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588164 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588223 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588242 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588261 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.588281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588320 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588368 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588391 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588409 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588444 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588462 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588481 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594995 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596706 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596756 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598146 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.587829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588035 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588182 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588405 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588850 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.588951 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.589913 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.591483 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594500 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594752 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594781 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594808 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.586749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.594939 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595211 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595441 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595435 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595537 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595690 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595691 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595765 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.595911 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596148 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.596789 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597071 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597080 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597096 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597228 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597318 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597591 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597621 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.597828 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598059 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598142 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598239 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.598647 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601187 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.601271 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.101229808 +0000 UTC m=+21.265937273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.601213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602366 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 23:08:36 
crc kubenswrapper[4995]: I0126 23:08:36.602732 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603061 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603138 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603382 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603438 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603484 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603529 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603553 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603574 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603613 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603635 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603658 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603691 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603751 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603790 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.603809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603840 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603860 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603888 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603926 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603945 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603967 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604019 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604039 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604070 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 
23:08:36.604092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.604918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606626 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606830 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606908 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606937 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606963 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607017 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607041 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607067 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607087 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607122 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607150 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607169 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607186 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607208 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607245 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607262 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607281 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607374 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607387 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607400 4995 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607411 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607421 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607431 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607441 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607450 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607461 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607471 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607482 4995 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607492 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 
26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607500 4995 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607510 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607519 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607530 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607541 4995 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607550 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607560 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607569 4995 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607578 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607586 4995 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607596 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607605 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607615 4995 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607625 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607634 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.607644 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607653 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607662 4995 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607671 4995 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607682 4995 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607692 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607702 4995 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607712 4995 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607721 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607731 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607741 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607750 4995 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607761 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607771 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607780 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607789 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607799 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607808 4995 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607818 4995 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607827 4995 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607844 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607853 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607862 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607871 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609259 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609395 4995 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611034 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611766 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602517 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.602727 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603023 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.603454 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606464 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.606615 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607265 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607426 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607528 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607690 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.607963 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608031 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616371 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.608227 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608327 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608343 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608350 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608506 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608602 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.608894 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609114 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609420 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609780 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.609896 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610011 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610390 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610447 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610648 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610667 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.610830 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611702 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611761 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.611779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612357 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612407 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612664 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612793 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612808 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612920 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.612991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613778 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.613909 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.613991 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614335 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614669 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.614811 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615181 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615517 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.615698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616676 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.616714 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.116693544 +0000 UTC m=+21.281401009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.616788 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.616874 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.116867189 +0000 UTC m=+21.281574654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617185 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617243 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617881 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.617977 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.621214 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.623283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624321 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624687 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624830 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.624930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.625448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.625771 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.626787 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627310 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627654 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.627989 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629004 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629258 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629380 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629403 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629418 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.629482 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.129462677 +0000 UTC m=+21.294170142 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.629714 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630002 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630053 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630354 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630608 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630817 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630926 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.630996 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631148 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631332 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631456 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631766 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631967 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.631997 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632117 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632260 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632279 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632302 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632515 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632584 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.632772 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.632973 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633010 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633029 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.633159 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:37.133067403 +0000 UTC m=+21.297775058 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633407 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.633928 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634091 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634084 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634314 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638024 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.634886 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635216 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635570 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.635887 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636045 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636290 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638424 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.636487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.637593 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638438 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.638722 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639234 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639298 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639553 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.639967 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.642507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.643305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.643401 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644144 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644404 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644681 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644699 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.644965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645037 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645168 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.645732 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646059 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646521 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" exitCode=255 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646565 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7"} Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.646617 4995 scope.go:117] "RemoveContainer" containerID="f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.650698 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.655378 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.662396 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:36 crc kubenswrapper[4995]: E0126 23:08:36.662678 4995 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.662987 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.664944 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.667305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.667304 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.675026 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.679809 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.680137 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.691170 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.700164 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709074 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709180 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709236 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709309 4995 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709325 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709339 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709350 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709360 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709371 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709382 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709393 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709402 4995 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709414 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709424 4995 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709435 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709445 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709454 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709463 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709472 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709483 4995 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709533 4995 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709548 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709562 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709577 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709592 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709606 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709619 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709631 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709643 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709656 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709669 4995 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709680 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709691 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709703 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709714 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709728 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709739 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709750 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709762 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709774 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709786 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709797 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: 
\"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709808 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709820 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709832 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709843 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709855 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709870 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709882 4995 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709896 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709908 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709920 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709933 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709945 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709958 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709970 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: 
\"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709982 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.709996 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710007 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710019 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710030 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710042 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710052 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.710064 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710079 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710091 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710893 4995 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710910 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710925 4995 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710938 4995 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 
23:08:36.710949 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710962 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710067 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.710973 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711066 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711078 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711092 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711116 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711130 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711144 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711153 4995 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711162 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711172 4995 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711181 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711191 4995 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711200 4995 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711214 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711226 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711237 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711247 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711260 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node 
\"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711271 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711281 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711289 4995 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711299 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711310 4995 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711320 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711329 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc 
kubenswrapper[4995]: I0126 23:08:36.711338 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711346 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711355 4995 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711363 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711372 4995 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711381 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711390 4995 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711398 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711407 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711415 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711423 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711432 4995 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711441 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711450 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711458 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath 
\"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711467 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711476 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711493 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711504 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711513 4995 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711521 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711531 4995 reconciler_common.go:293] "Volume detached for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711539 4995 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711547 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711556 4995 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711564 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711573 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711583 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711591 4995 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711603 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711612 4995 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711621 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711629 4995 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711638 4995 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711648 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711657 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711665 4995 reconciler_common.go:293] "Volume detached for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711674 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711685 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711694 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711704 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711713 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711724 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711735 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: 
\"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711762 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711772 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711781 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.711790 4995 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.719390 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.729127 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.738664 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.751229 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.759464 4995 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.767839 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.775424 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.777598 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.789277 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85 WatchSource:0}: Error finding container 5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85: Status 404 returned error can't find the container with id 5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.789966 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.793023 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.800001 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.804967 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: W0126 23:08:36.810730 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5 WatchSource:0}: Error finding container d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5: Status 404 returned error can't find the container with id d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5 Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.817635 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.831626 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.841524 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.858533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:36 crc kubenswrapper[4995]: I0126 23:08:36.870372 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.048556 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 23:03:36 +0000 UTC, rotation deadline is 2026-12-18 01:23:28.544984682 +0000 UTC Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.048628 4995 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7802h14m51.496359722s for next certificate rotation Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.115164 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.115320 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.115296911 +0000 UTC m=+22.280004376 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.215951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216025 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216054 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.216119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216165 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216190 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216197 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216203 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216243 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216252 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216233215 +0000 UTC m=+22.380940680 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216365 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216351868 +0000 UTC m=+22.381059333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216382 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216372929 +0000 UTC m=+22.381080404 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216494 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216570 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216589 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.216699 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:38.216665816 +0000 UTC m=+22.381373481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.468303 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 09:17:42.46877918 +0000 UTC Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.650286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"3cf15b92960d60889cb4e79030289e7f6c110c85abee044dbc223e964c6749dc"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652335 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652367 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.652379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d678310b177bb99398619a51da0ed4605202169e8d1f25688e5730c25d022ea5"} Jan 26 
23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.654243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.654268 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5c12f86f42931ccc3c7576c91d0d994f756d10e1e5d4b3f810a8642e430dec85"} Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.656114 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.657993 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:37 crc kubenswrapper[4995]: E0126 23:08:37.658136 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.701141 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.740376 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.765582 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.781425 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.796498 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.811661 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4fe19d7a699e1baf501eb85ad819135c0703d5f5a1c7f270a1ca5f4092131fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:30Z\\\",\\\"message\\\":\\\"W0126 23:08:19.601263 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0126 23:08:19.601594 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769468899 cert, and key in /tmp/serving-cert-4050516234/serving-signer.crt, /tmp/serving-cert-4050516234/serving-signer.key\\\\nI0126 23:08:19.891540 1 observer_polling.go:159] Starting file observer\\\\nW0126 23:08:19.898437 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0126 23:08:19.898613 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 23:08:19.902820 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4050516234/tls.crt::/tmp/serving-cert-4050516234/tls.key\\\\\\\"\\\\nF0126 23:08:30.280915 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.824492 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.842255 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.858537 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.870471 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.889911 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.906285 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.920920 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:37 crc kubenswrapper[4995]: I0126 23:08:37.937271 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:37Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.124437 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.124598 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.124584121 +0000 UTC m=+24.289291586 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145315 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-m8zlz"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145591 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hln88"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145723 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pkt82"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145735 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.145880 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.146310 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.146734 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-sj7pr"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.147203 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.153450 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.155993 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156087 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156436 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156661 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156753 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156834 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.156945 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 23:08:38 crc 
kubenswrapper[4995]: I0126 23:08:38.157029 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157160 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157179 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157308 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157320 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.157961 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.187181 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701
db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.207356 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.223565 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224863 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224877 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224894 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224912 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224933 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224948 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224961 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224974 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.224991 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" 
(UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225007 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225028 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225140 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225163 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225172 4995 projected.go:288] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225201 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225215 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225259 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225243679 +0000 UTC m=+24.389951144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225259 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225301 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.22528937 +0000 UTC m=+24.389996835 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225326 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225341 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225364 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225382 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225397 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225412 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225426 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225440 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: 
\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225454 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225474 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225485 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225513 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225524 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225573 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225552716 +0000 UTC m=+24.390260181 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225493 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225635 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225654 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225676 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225692 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.225778 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225821 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.225849 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:40.225839853 +0000 UTC m=+24.390547318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.235500 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.244913 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.257650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.267941 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.276392 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.287346 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.299041 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.311984 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.324650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327007 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327188 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327292 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327399 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327582 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327670 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: 
\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327421 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-hostroot\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327811 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-k8s-cni-cncf-io\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327852 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-hosts-file\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327923 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cni-binary-copy\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327944 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.327767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-system-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328168 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328259 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328361 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328449 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328635 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328717 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328802 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329127 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329429 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329531 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-netns\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328848 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-mcd-auth-proxy-config\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cnibin\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329487 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-rootfs\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329261 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329510 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-system-cni-dir\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328912 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329610 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-os-release\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329292 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-cnibin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.328262 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-kubelet\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " 
pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329672 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-conf-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329672 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-os-release\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329454 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-socket-dir-parent\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.329538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-bin\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330269 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330385 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330479 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330568 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-var-lib-cni-multus\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330598 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-cni-dir\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330825 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.330922 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331021 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331171 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-host-run-multus-certs\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4ba70657-ea12-4a85-9ec3-c1423b5b6912-etc-kubernetes\") 
pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.331339 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4ba70657-ea12-4a85-9ec3-c1423b5b6912-multus-daemon-config\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.332624 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-proxy-tls\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.337716 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.344066 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25pf7\" (UniqueName: \"kubernetes.io/projected/4ba70657-ea12-4a85-9ec3-c1423b5b6912-kube-api-access-25pf7\") pod \"multus-hln88\" (UID: \"4ba70657-ea12-4a85-9ec3-c1423b5b6912\") " pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.347471 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rmfp\" (UniqueName: \"kubernetes.io/projected/b7acc40a-3d17-4c4f-8300-2fa8c89564a9-kube-api-access-7rmfp\") pod \"multus-additional-cni-plugins-pkt82\" (UID: \"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\") " pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.349057 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884rn\" (UniqueName: \"kubernetes.io/projected/15f852ca-fb3b-4ad2-836a-d0dbe735dde4-kube-api-access-884rn\") pod \"node-resolver-m8zlz\" (UID: \"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\") " pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.349152 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clj2d\" (UniqueName: \"kubernetes.io/projected/09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4-kube-api-access-clj2d\") pod \"machine-config-daemon-sj7pr\" (UID: \"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\") " 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.354249 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.372203 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.381434 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.385988 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.391435 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.393591 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.401874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.419264 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,
\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.439384 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.460010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.462836 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-m8zlz" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.468906 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:17:32.990702686 +0000 UTC Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.471407 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pkt82" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.473166 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.478572 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15f852ca_fb3b_4ad2_836a_d0dbe735dde4.slice/crio-5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f WatchSource:0}: Error finding container 5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f: Status 404 returned error can't find the container with id 5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 
23:08:38.483114 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hln88" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.490504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.493931 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.496583 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7acc40a_3d17_4c4f_8300_2fa8c89564a9.slice/crio-ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2 WatchSource:0}: Error finding container ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2: Status 404 returned error can't find the container with id ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2 Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516280 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516301 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516407 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.516467 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516525 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:38 crc kubenswrapper[4995]: E0126 23:08:38.516852 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521232 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521370 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.521891 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.523420 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.524142 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.525240 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.525883 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.526651 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.528492 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.529440 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.531759 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.532390 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.533830 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.534440 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.536671 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.537427 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.538111 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.539302 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.539793 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.541518 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.542500 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.543142 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.544791 
4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.545662 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.547408 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.547975 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.553787 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.554777 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.554899 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.555424 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.556703 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.557547 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.558755 4995 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.558874 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.561159 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.562058 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.563414 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.565239 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.566231 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.567472 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.568266 4995 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.570089 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.572314 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.573034 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.574136 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.574738 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.575882 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.576339 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.577279 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.577766 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.578830 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.579332 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.580169 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.582456 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.583297 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.583919 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.584592 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.585036 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.589633 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591699 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591915 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.591955 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.592223 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.592446 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.594586 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.594739 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 23:08:38 crc kubenswrapper[4995]: 
I0126 23:08:38.609557 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866b
e30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.632395 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c0
4bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.645607 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.662051 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.665241 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"d72fe382310a4aad8215c99e864bc042e6eccd79c55b7cfb2bf698a1d63951d8"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.667590 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8zlz" event={"ID":"15f852ca-fb3b-4ad2-836a-d0dbe735dde4","Type":"ContainerStarted","Data":"5411a42de63204581cd09f5268bbda31765aeba3655837714630d122899a832f"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.668637 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" 
event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerStarted","Data":"ac1fd80269501dfce5a077c39101995937ef8765c5f3e38b83deb0442d5dc4a2"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.671370 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"d4d65edfef32fd1663a349c7d8d4c958f5f32a84fb38e5a093ecf4fa0d17a6b2"} Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.678863 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.693638 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.706748 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.728191 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735063 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735121 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735192 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735226 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735262 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735277 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735300 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735328 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735342 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735371 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735398 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735415 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735429 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: 
I0126 23:08:38.735456 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735590 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.735614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.748661 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.763155 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.773884 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.788608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.807581 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.827072 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836401 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836477 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836504 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836530 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836531 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836608 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836632 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836650 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836629 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod 
\"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836571 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836664 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836496 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836727 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836898 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836909 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836823 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836940 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.836967 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837049 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 
23:08:38.837073 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837090 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837129 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837131 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837110 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837194 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837197 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837239 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837153 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837225 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.837797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.844996 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.847083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.857352 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"ovnkube-node-l9xmp\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.857493 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.870620 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.879853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.891658 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:38Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:38 crc kubenswrapper[4995]: I0126 23:08:38.940957 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:38 crc kubenswrapper[4995]: W0126 23:08:38.953148 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe4486f1_6ac2_4655_aff8_634049c9aa6c.slice/crio-0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52 WatchSource:0}: Error finding container 0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52: Status 404 returned error can't find the container with id 0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.469960 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:15:53.667071392 +0000 UTC Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.677694 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.677741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.680519 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" 
event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.681894 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-m8zlz" event={"ID":"15f852ca-fb3b-4ad2-836a-d0dbe735dde4","Type":"ContainerStarted","Data":"006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683166 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" exitCode=0 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683246 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.683292 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.684993 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03" exitCode=0 Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.685143 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.686327 
4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0"} Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.693853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metric
s-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.708985 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.733984 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.751897 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.767844 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.782037 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.793297 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.804645 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.816270 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with 
unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.825591 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.836557 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.855754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.867284 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.878949 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.894827 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.913681 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.928451 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"ho
stIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.939709 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.953089 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.966317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.980650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:39 crc kubenswrapper[4995]: I0126 23:08:39.993389 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:39Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.003988 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.027603 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.090673 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.103419 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.115190 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.118748 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.125336 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.128952 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.145861 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.152999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.153155 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.153141517 +0000 UTC m=+28.317848982 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.160671 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"
name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.173522 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.188674 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.206933 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.221291 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.235284 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\
"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.250029 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254253 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254287 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.254314 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254416 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254431 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254441 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254477 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254464461 +0000 UTC m=+28.419171926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254731 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254751808 +0000 UTC m=+28.419459273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254812 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254824 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254833 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254858 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.25484996 +0000 UTC m=+28.419557425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254910 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.254935 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:44.254928062 +0000 UTC m=+28.419635527 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.273269 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.286249 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.298430 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.311376 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.327188 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.344153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.358088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.378822 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.391267 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.423161 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.462000 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.471285 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:30:16.489038997 +0000 UTC Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.501664 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516316 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516354 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516458 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.516508 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516639 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:40 crc kubenswrapper[4995]: E0126 23:08:40.516701 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.542682 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.582461 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.620303 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.660196 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.691777 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9" exitCode=0 Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.691857 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697669 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697720 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:08:40 crc 
kubenswrapper[4995]: I0126 23:08:40.697733 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697745 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.697766 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.706777 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.752230 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.782654 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.787094 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xltwc"] Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.787525 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.814025 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.833829 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.853453 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858866 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858919 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.858952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.874306 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 23:08:40 crc 
kubenswrapper[4995]: I0126 23:08:40.899730 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.948051 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.959917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.959983 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960007 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960062 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/d39f52ec-0319-4f38-b9f5-7f472d8006c5-host\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.960910 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d39f52ec-0319-4f38-b9f5-7f472d8006c5-serviceca\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:40 crc kubenswrapper[4995]: I0126 23:08:40.981538 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:40Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.013125 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwzch\" (UniqueName: \"kubernetes.io/projected/d39f52ec-0319-4f38-b9f5-7f472d8006c5-kube-api-access-vwzch\") pod \"node-ca-xltwc\" (UID: \"d39f52ec-0319-4f38-b9f5-7f472d8006c5\") " pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.043199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.086185 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.100450 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xltwc" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.124088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.168272 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.203610 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.243157 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.284276 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.322505 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.363033 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.401768 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.442724 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.473752 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 00:03:50.643100995 +0000 UTC Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.480895 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.520660 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.572736 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.629086 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.647552 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.680907 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.702593 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f" exitCode=0 Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.702671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f"} Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.704241 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xltwc" event={"ID":"d39f52ec-0319-4f38-b9f5-7f472d8006c5","Type":"ContainerStarted","Data":"54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae"} Jan 26 23:08:41 crc 
kubenswrapper[4995]: I0126 23:08:41.704290 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xltwc" event={"ID":"d39f52ec-0319-4f38-b9f5-7f472d8006c5","Type":"ContainerStarted","Data":"36b11a7b7bc03340e54279e0f1324786df133dfe94d417d87f36296366d15d3b"} Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.724634 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.763477 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.804621 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.841997 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.881521 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.927655 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:41 crc kubenswrapper[4995]: I0126 23:08:41.960075 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:41Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.003636 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.044322 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.078596 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.127608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23
:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.161021 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.210371 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.252401 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.291040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.325003 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 
23:08:42.363301 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 
23:08:42.381388 4995 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383015 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.383168 4995 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.403257 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.455336 4995 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.455627 4995 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456943 4995 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.456955 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.474278 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 20:47:15.711049155 +0000 UTC Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.474764 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.478947 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.482373 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.492830 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496514 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496572 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.496582 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.512509 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516428 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516483 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516551 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516624 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.516643 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.516730 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.516851 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.517022 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.522766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.533775 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537258 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537275 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.537285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.549481 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: E0126 23:08:42.549601 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551144 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551178 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.551219 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.567396 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.601467 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.642009 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653272 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653310 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653323 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.653332 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.712185 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89" exitCode=0 Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.712225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.729760 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.747938 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755086 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.755242 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.762429 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.801498 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.841516 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.857200 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.880584 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.920470 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.959486 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:42Z","lastTransitionTime":"2026-01-26T23:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:42 crc kubenswrapper[4995]: I0126 23:08:42.961768 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:42Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.007774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.043501 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062268 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062332 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.062364 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.082223 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.125693 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165364 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.165383 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.169729 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.208878 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.243046 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267897 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267927 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267949 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.267958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.371578 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.425232 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474165 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474202 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474220 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474231 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.474428 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:05:32.449251203 +0000 UTC Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576646 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.576666 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678959 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.678993 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.679007 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.725984 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.728550 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a" exitCode=0 Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.728584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.745419 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.761206 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.775175 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.775773 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:43 crc kubenswrapper[4995]: E0126 23:08:43.775911 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 
23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784757 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.784766 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.785992 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.799001 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa
80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.831602 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.842733 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.853689 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.863926 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.871874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888721 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.888731 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.891540 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.903633 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.922040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.933802 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.947995 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.959818 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:43Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:43 crc kubenswrapper[4995]: I0126 23:08:43.990506 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:43Z","lastTransitionTime":"2026-01-26T23:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093139 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.093166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.191793 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.192007 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:08:52.191974877 +0000 UTC m=+36.356682352 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196241 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.196295 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.292767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.292999 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.293133 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.293257 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.292939 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293315 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293330 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293378 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293361532 +0000 UTC m=+36.458069007 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293064 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293198 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293710 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293760 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293611 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293565217 +0000 UTC m=+36.458272712 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293838 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293876 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293841873 +0000 UTC m=+36.458549378 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.293924 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.293906225 +0000 UTC m=+36.458613820 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299431 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.299468 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.402857 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.402908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403250 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.403265 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.474733 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 05:36:40.328450127 +0000 UTC Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.505968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.506059 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.519339 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.519651 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.520087 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.520328 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.520200 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:44 crc kubenswrapper[4995]: E0126 23:08:44.520425 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608658 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608677 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.608689 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711411 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.711450 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.735012 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7acc40a-3d17-4c4f-8300-2fa8c89564a9" containerID="f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26" exitCode=0 Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.735055 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerDied","Data":"f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.748510 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.762031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.794060 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813743 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813772 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813794 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.813804 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.832457 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.847266 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.864746 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.878415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.893581 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.906295 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916806 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.916848 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:44Z","lastTransitionTime":"2026-01-26T23:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.917041 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.927772 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.946287 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.957995 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:44 crc kubenswrapper[4995]: I0126 23:08:44.967059 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:44Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.018976 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.019040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 
23:08:45.019050 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121695 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121755 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.121786 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223716 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.223727 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326575 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.326647 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429329 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.429351 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.475559 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 21:21:20.181738107 +0000 UTC Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.532404 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.635426 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.737997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.738014 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.744305 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.745749 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.745782 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.748722 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" event={"ID":"b7acc40a-3d17-4c4f-8300-2fa8c89564a9","Type":"ContainerStarted","Data":"4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.763903 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.787810 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.788352 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.788654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.802225 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.814128 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.828731 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.839980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840157 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840486 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.840649 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.856266 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.869609 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.903129 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.920778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943266 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.943318 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:45Z","lastTransitionTime":"2026-01-26T23:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.955985 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:45 crc kubenswrapper[4995]: I0126 23:08:45.984991 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:45Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.002865 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.019592 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.032290 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc 
kubenswrapper[4995]: I0126 23:08:46.045738 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.045921 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.053115 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.067137 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.077942 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.092230 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.105357 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.116902 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.134979 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.148847 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.173983 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c
8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.187166 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.203538 4995 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.205067 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.218353 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.235470 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.248250 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250818 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250834 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.250845 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.267778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z 
is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353045 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353137 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.353146 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.455704 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.476063 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:30:31.446655507 +0000 UTC Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516409 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516499 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.516525 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516672 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516756 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:46 crc kubenswrapper[4995]: E0126 23:08:46.516872 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.533801 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.551495 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558310 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558379 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.558452 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.567143 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z 
is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.604151 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.616941 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.632445 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.648384 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.659421 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.660937 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.676717 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.690064 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.699651 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.717964 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23
:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.730595 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.745153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.751367 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.755444 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:46Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765437 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765623 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.765669 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868951 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.868982 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:46 crc kubenswrapper[4995]: I0126 23:08:46.972902 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:46Z","lastTransitionTime":"2026-01-26T23:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.076306 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178979 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.178990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.179004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.179014 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.281997 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384616 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.384744 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.476742 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:41:07.161277231 +0000 UTC Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.487379 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589387 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589450 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589487 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.589500 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.691947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692343 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.692357 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.753742 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795256 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.795281 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.897905 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898088 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:47 crc kubenswrapper[4995]: I0126 23:08:47.898309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:47Z","lastTransitionTime":"2026-01-26T23:08:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000861 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000940 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.000952 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103741 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.103788 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206932 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206978 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.206989 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.207005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.207017 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309288 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.309368 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.413964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414031 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414051 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414080 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.414131 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.477498 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 03:55:37.611092392 +0000 UTC Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516317 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516352 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516471 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.516321 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516684 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:48 crc kubenswrapper[4995]: E0126 23:08:48.516874 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517440 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.517585 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.620778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723808 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.723829 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.759205 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/0.log" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.762728 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" exitCode=1 Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.762787 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.763599 4995 scope.go:117] "RemoveContainer" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.803216 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.818448 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830414 
4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.830478 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.833738 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.845889 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.860657 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.877975 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.900761 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.922763 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 
6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933603 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933619 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.933631 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:48Z","lastTransitionTime":"2026-01-26T23:08:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.937887 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.955046 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.965825 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.977782 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:48 crc kubenswrapper[4995]: I0126 23:08:48.988313 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:48Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.002281 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.011017 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.036055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 
23:08:49.036064 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.087308 4995 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.137928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138204 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.138402 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240913 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.240998 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343056 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343117 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.343166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.445952 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.478563 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 20:13:26.949519758 +0000 UTC Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.547682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.548050 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.650619 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.695935 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7"] Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.696438 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.698458 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.698563 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.710449 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819
eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba
39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a7
16019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dc
dcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.722413 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.732202 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.741562 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753277 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc 
kubenswrapper[4995]: I0126 23:08:49.753289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.753319 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.754774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.765811 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.767926 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.773845 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/0.log" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777838 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" exitCode=1 Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.777916 4995 scope.go:117] "RemoveContainer" containerID="782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.780476 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:08:49 crc kubenswrapper[4995]: E0126 23:08:49.780657 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 
23:08:49.786254 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.799042 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d
793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.818523 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 
6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.832891 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.844933 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.854999 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855336 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855518 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.855716 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: 
\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856146 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.856189 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.858638 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.872388 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.886467 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.901427 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.923908 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.944380 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\
":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\
\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957444 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957518 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.957610 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958746 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.958775 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:49Z","lastTransitionTime":"2026-01-26T23:08:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.959333 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.959947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.960449 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.964191 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.970533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.973758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wp4c\" (UniqueName: \"kubernetes.io/projected/1ef1196f-dfec-4c45-9abc-0cd1df4bc941-kube-api-access-5wp4c\") pod \"ovnkube-control-plane-749d76644c-2rkl7\" (UID: \"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.980572 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:49 crc kubenswrapper[4995]: I0126 23:08:49.998231 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.009465 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.013208 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":
\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabou
ts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: W0126 23:08:50.019906 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef1196f_dfec_4c45_9abc_0cd1df4bc941.slice/crio-0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433 WatchSource:0}: Error finding container 0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433: Status 404 returned error can't find the container with id 0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433 Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.026121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.036168 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa
80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.051451 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 
23:08:50.060508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060585 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.060595 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.063431 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.076427 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.087231 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.096962 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.110303 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.119064 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.128021 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163444 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.163481 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.266425 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.369355 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472578 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472637 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.472698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.479171 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 01:48:52.144118087 +0000 UTC Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517064 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517174 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.517180 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517312 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517487 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:50 crc kubenswrapper[4995]: E0126 23:08:50.517777 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.576964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.577040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.577071 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679549 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.679602 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782646 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.782696 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.786437 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" event={"ID":"1ef1196f-dfec-4c45-9abc-0cd1df4bc941","Type":"ContainerStarted","Data":"0e3221e1deef768a8588852e9b0183ef6509c5b31a01ce8661f7860e3ed67433"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.789580 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.806405 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.818896 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.834415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.860537 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.876294 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.885588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.885926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886135 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.886530 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.893713 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.905497 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.918882 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.933812 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.947347 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.960374 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.982410 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.989594 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:50Z","lastTransitionTime":"2026-01-26T23:08:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:50 crc kubenswrapper[4995]: I0126 23:08:50.995415 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:50Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.004153 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.013866 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.029091 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 
23:08:51.091717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091772 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091783 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.091807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.193957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194003 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194014 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194030 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.194042 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296217 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.296242 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399413 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399490 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.399533 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.479731 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:44:57.429022815 +0000 UTC Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.502499 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.530243 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.530930 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.531042 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.567145 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T
23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.581486 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.581536 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.588331 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operat
or@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605259 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605363 4995 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.605382 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.608785 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.628323 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.644969 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc 
kubenswrapper[4995]: I0126 23:08:51.664603 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.682211 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.682327 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.682494 4995 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:51 crc kubenswrapper[4995]: E0126 23:08:51.682611 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:52.182579991 +0000 UTC m=+36.347287486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.688465 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b7
4ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.705722 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xtg8\" (UniqueName: \"kubernetes.io/projected/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-kube-api-access-5xtg8\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.708954 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.714766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.731404 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.763121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.800898 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811961 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.811991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.812003 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.817268 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.828832 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.846442 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 
23:08:51.858862 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\"
,\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.867227 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.877383 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:51Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915067 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:51 crc kubenswrapper[4995]: I0126 23:08:51.915125 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:51Z","lastTransitionTime":"2026-01-26T23:08:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018967 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.018985 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.019009 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.019027 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122730 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.122786 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.186900 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.187067 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.187215 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:53.18718079 +0000 UTC m=+37.351888315 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.225911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.226088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.287986 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.288355 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.288305679 +0000 UTC m=+52.453013184 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329471 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.329586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: 
I0126 23:08:52.329608 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389072 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389309 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389353 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.389364 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389377 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389453 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389489 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389525 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389496189 +0000 UTC m=+52.554203694 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389569 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.38955023 +0000 UTC m=+52.554257725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389597 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389581521 +0000 UTC m=+52.554289026 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389489 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389657 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389680 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.389751 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:08.389731444 +0000 UTC m=+52.554438949 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432954 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432970 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.432982 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.480852 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:11:21.999477335 +0000 UTC Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516776 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516875 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.516928 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.516776 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.517020 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.517164 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535791 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535816 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.535869 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637560 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.637593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.739733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841325 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841345 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.841355 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922483 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.922501 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.945883 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950553 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.950570 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.967152 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971350 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971426 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.971439 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:52 crc kubenswrapper[4995]: E0126 23:08:52.983474 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986590 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:52 crc kubenswrapper[4995]: I0126 23:08:52.986622 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:52Z","lastTransitionTime":"2026-01-26T23:08:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:52.999851 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:52Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.003922 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.024308 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:53Z is after 2025-08-24T17:21:41Z"
Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.024458 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026260 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.026371 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129039 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129171 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129201 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.129223 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.196063 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.196377 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.196472 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:55.19644854 +0000 UTC m=+39.361156045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232316 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.232399 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.335306 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439256 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.439304 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.481162 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:52:20.727049603 +0000 UTC
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.516853 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:08:53 crc kubenswrapper[4995]: E0126 23:08:53.516988 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542174 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.542236 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.645604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.645915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646323 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.646445 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749651 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.749763 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853492 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.853636 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:53 crc kubenswrapper[4995]: I0126 23:08:53.957242 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:53Z","lastTransitionTime":"2026-01-26T23:08:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060708 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060724 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.060737 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.163248 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265857 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265920 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.265977 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.368912 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472871 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472897 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.472914 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.482247 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 22:50:03.504348274 +0000 UTC
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516218 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516258 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.516368 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516650 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:08:54 crc kubenswrapper[4995]: E0126 23:08:54.516843 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575281 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575371 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575399 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.575421 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684550 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.684671 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787459 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787484 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.787506 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890757 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.890940 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.891164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.891389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.994877 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995163 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995360 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:08:54 crc kubenswrapper[4995]: I0126 23:08:54.995455 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:54Z","lastTransitionTime":"2026-01-26T23:08:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.098975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099088 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.099213 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.201982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.202010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.202031 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.216910 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.217205 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.217313 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:08:59.217282652 +0000 UTC m=+43.381990167 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305021 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.305088 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408525 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.408536 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.482388 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:00:14.91447647 +0000 UTC Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512471 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512573 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.512625 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.516760 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:55 crc kubenswrapper[4995]: E0126 23:08:55.516964 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615944 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.615990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.616012 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.720305 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822828 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822886 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.822909 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926063 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926169 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926227 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:55 crc kubenswrapper[4995]: I0126 23:08:55.926248 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:55Z","lastTransitionTime":"2026-01-26T23:08:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029814 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029903 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029932 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.029950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.133769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134173 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.134230 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.236894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.237624 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340784 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.340914 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.449947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.482757 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 04:43:11.767851671 +0000 UTC Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517398 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517398 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.517642 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518129 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518264 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:56 crc kubenswrapper[4995]: E0126 23:08:56.518387 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.518595 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.537435 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556600 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556719 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.556834 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.572492 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.591406 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 
23:08:56.604336 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.616754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.628003 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.638853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.652663 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659677 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.659689 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.667763 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.678795 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.690264 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.714270 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.728304 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.747562 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761467 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761525 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.761594 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.763742 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.773660 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.816722 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.818612 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.818992 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.834272 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.850076 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.861190 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.863875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864202 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.864283 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.870485 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.885313 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.901497 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://782351d41b229cae31f211dda67abc070fbd6464709e473b56057cf8ceb90e46\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:48Z\\\",\\\"message\\\":\\\"ointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032811 6272 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 23:08:48.032862 6272 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 23:08:48.032932 6272 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 23:08:48.033463 6272 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 23:08:48.033506 6272 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 23:08:48.033514 6272 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 23:08:48.033533 6272 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 23:08:48.033532 6272 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 23:08:48.033534 6272 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 23:08:48.033558 6272 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 23:08:48.033566 6272 factory.go:656] Stopping watch factory\\\\nI0126 23:08:48.033590 6272 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\
\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-ove
rrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 
23:08:56.911870 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.924709 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.938425 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.951484 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.965378 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969843 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.969854 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:56Z","lastTransitionTime":"2026-01-26T23:08:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.979466 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:56 crc kubenswrapper[4995]: I0126 23:08:56.989005 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:56Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.006634 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.019053 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.030385 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.041361 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072454 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.072804 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.158015 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.158860 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:08:57 crc kubenswrapper[4995]: E0126 23:08:57.159049 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.174193 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.176564 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.187029 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.201336 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.214288 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.226793 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.239454 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.251442 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.270881 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.278948 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.278989 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.279030 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.287018 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.298472 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.312763 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.336189 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.354437 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.366287 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.380697 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.381693 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.395750 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.425454 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:57Z is after 2025-08-24T17:21:41Z" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.482857 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:28:34.798164817 +0000 UTC Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484913 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.484945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: 
I0126 23:08:57.484958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.517040 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:57 crc kubenswrapper[4995]: E0126 23:08:57.517192 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587593 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.587603 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.690285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792325 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.792363 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.894830 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.996991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:57 crc kubenswrapper[4995]: I0126 23:08:57.997117 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:57Z","lastTransitionTime":"2026-01-26T23:08:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101141 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101165 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.101179 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204012 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204079 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.204641 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307378 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307447 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.307458 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.409928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.410090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.410262 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.483708 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 20:39:46.160438828 +0000 UTC Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512764 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.512774 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.516921 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.516935 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.517180 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517306 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517514 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:08:58 crc kubenswrapper[4995]: E0126 23:08:58.517577 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.615631 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.717668 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820331 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.820398 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:58 crc kubenswrapper[4995]: I0126 23:08:58.923758 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:58Z","lastTransitionTime":"2026-01-26T23:08:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.025958 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.128887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129387 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.129471 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.231930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.231979 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.232058 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.255585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.255754 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.255823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:07.255805854 +0000 UTC m=+51.420513329 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334672 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334736 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.334766 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437151 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437187 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.437198 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.485343 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:02:39.185835455 +0000 UTC Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.516671 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:08:59 crc kubenswrapper[4995]: E0126 23:08:59.516838 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539549 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.539620 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642340 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.642379 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745716 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.745740 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848322 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848449 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.848460 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:08:59 crc kubenswrapper[4995]: I0126 23:08:59.950791 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:08:59Z","lastTransitionTime":"2026-01-26T23:08:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053163 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.053221 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155793 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.155834 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258134 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.258524 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.359995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360046 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.360073 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462569 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.462593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.486197 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:53:11.123137008 +0000 UTC Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516900 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516968 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.516974 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517075 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517217 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:00 crc kubenswrapper[4995]: E0126 23:09:00.517291 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565264 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565276 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.565305 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667484 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.667516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770483 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.770492 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.872326 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:00 crc kubenswrapper[4995]: I0126 23:09:00.975669 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:00Z","lastTransitionTime":"2026-01-26T23:09:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078688 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.078738 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180594 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180640 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.180663 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282812 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282836 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282849 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.282858 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385918 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.385927 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.486437 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 22:11:54.278255027 +0000 UTC Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.488636 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.516888 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:01 crc kubenswrapper[4995]: E0126 23:09:01.517052 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.591987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.592011 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.592028 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.695275 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798818 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798841 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.798897 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901787 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901894 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:01 crc kubenswrapper[4995]: I0126 23:09:01.901912 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:01Z","lastTransitionTime":"2026-01-26T23:09:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.004629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108170 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108262 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108278 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108304 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.108321 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.211447 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.314808 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.417947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418077 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.418095 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.487036 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 05:41:01.685706856 +0000 UTC Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516851 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516935 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.516856 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517040 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517231 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:02 crc kubenswrapper[4995]: E0126 23:09:02.517426 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.521941 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522041 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.522057 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625468 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.625629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727762 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727825 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727841 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.727851 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.830733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:02 crc kubenswrapper[4995]: I0126 23:09:02.934617 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:02Z","lastTransitionTime":"2026-01-26T23:09:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037860 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037929 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.037986 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.113985 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.128784 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140714 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140776 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140799 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140827 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.140848 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.157759 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.176414 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.197608 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.212816 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.224164 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.240146 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243208 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.243225 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.261799 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.281880 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.315050 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.336019 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 
23:09:03.345700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.345713 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.358132 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.379068 4995 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.397960 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.398360 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.411525 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.415164 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419524 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419555 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.419567 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.427754 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.437348 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.438555 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441776 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.441792 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.448874 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.453963 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.457094 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.466914 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.469710 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.480947 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:03Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.481127 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482550 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.482606 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.487828 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:49:58.13719285 +0000 UTC Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.516318 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:03 crc kubenswrapper[4995]: E0126 23:09:03.516437 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584233 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584272 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584284 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.584313 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688156 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.688209 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.791440 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894512 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894527 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.894539 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997280 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997358 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997375 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:03 crc kubenswrapper[4995]: I0126 23:09:03.997386 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:03Z","lastTransitionTime":"2026-01-26T23:09:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099938 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099948 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099963 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.099973 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202804 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.202850 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306806 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.306825 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410659 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410682 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410715 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.410732 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.488333 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 06:52:25.540019378 +0000 UTC
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514258 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.514316 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.516602 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.516617 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.516808 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.516848 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.517133 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:09:04 crc kubenswrapper[4995]: E0126 23:09:04.517423 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.617412 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720516 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720642 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720704 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.720719 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.823753 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.823998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.824403 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:04 crc kubenswrapper[4995]: I0126 23:09:04.927857 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:04Z","lastTransitionTime":"2026-01-26T23:09:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030802 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.030856 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134010 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.134080 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237011 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.237075 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.340926 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443848 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.443865 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.488736 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 04:53:22.10321458 +0000 UTC
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.516340 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:09:05 crc kubenswrapper[4995]: E0126 23:09:05.516629 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.546984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547060 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.547075 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.650227 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753564 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.753598 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857378 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857509 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.857532 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961134 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961157 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:05 crc kubenswrapper[4995]: I0126 23:09:05.961175 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:05Z","lastTransitionTime":"2026-01-26T23:09:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.064681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065317 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065400 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.065568 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169031 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169079 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169126 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.169185 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.272866 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376463 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.376480 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480279 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480379 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.480432 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.489763 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:52:40.596810736 +0000 UTC
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516638 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516675 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.516782 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517297 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517061 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 23:09:06 crc kubenswrapper[4995]: E0126 23:09:06.517460 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.537658 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metri
cs-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.556684 4995 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597
126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.572203 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.582579 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.593613 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.615775 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.631488 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.661193 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.679354 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686490 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686517 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.686576 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.695653 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.714165 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.737577 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.758174 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.775827 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790176 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.790357 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.793671 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.807663 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.825703 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.856455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.873211 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:06Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893773 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893834 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.893862 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:06 crc kubenswrapper[4995]: I0126 23:09:06.997203 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:06Z","lastTransitionTime":"2026-01-26T23:09:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100752 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100826 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100852 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.100903 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203804 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203835 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.203859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307216 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307232 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.307271 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.336682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.336819 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.336882 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:23.336864524 +0000 UTC m=+67.501571999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410352 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.410433 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.490913 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 13:36:35.534286534 +0000 UTC Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.514673 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.516986 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:07 crc kubenswrapper[4995]: E0126 23:09:07.517220 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618400 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618419 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618450 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.618474 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721342 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721386 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.721405 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825504 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.825516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929513 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:07 crc kubenswrapper[4995]: I0126 23:09:07.929563 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:07Z","lastTransitionTime":"2026-01-26T23:09:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032560 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.032610 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135714 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135801 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.135852 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.240670 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344609 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344722 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.344740 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.349421 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.349894 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:40.349823521 +0000 UTC m=+84.514531026 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448833 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.448899 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.449001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.449031 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451367 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451459 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.451573 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451681 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451689 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451731 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451780 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451808 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451739 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451831 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451772939 +0000 UTC m=+84.616480434 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451851 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451862 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451848901 +0000 UTC m=+84.616556396 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451913 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.451892682 +0000 UTC m=+84.616600187 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.451827 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.452085 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:40.452046026 +0000 UTC m=+84.616753571 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.492196 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 11:54:29.693800367 +0000 UTC Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516621 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516676 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.516727 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.516897 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.517031 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:08 crc kubenswrapper[4995]: E0126 23:09:08.517205 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551447 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551526 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551543 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.551602 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654735 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654785 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.654824 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757414 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757427 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.757436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860029 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860152 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860181 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.860200 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.963451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.963880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:08 crc kubenswrapper[4995]: I0126 23:09:08.964819 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:08Z","lastTransitionTime":"2026-01-26T23:09:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.072878 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176036 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.176837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.177008 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279923 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279974 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.279987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.280003 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.280016 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.383695 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.384090 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.384387 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487971 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.487998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.488010 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.492721 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 11:47:56.190066349 +0000 UTC Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.516179 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:09 crc kubenswrapper[4995]: E0126 23:09:09.516330 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.591815 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695475 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695539 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.695598 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.798677 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:09 crc kubenswrapper[4995]: I0126 23:09:09.902226 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:09Z","lastTransitionTime":"2026-01-26T23:09:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005274 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005314 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.005330 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108270 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.108414 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212186 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.212308 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315474 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.315516 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418830 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418959 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.418977 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.492916 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 00:54:40.565124931 +0000 UTC Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516310 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516362 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.516430 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516427 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516533 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:10 crc kubenswrapper[4995]: E0126 23:09:10.516587 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.523917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.523982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524001 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524027 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.524046 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625811 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625912 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.625931 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.728833 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.832353 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935319 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:10 crc kubenswrapper[4995]: I0126 23:09:10.935377 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:10Z","lastTransitionTime":"2026-01-26T23:09:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039406 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039434 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.039449 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143238 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143803 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.143936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.144069 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247476 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247527 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247564 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.247577 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351119 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351136 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.351148 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454366 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.454407 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.493956 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:04:12.398807238 +0000 UTC Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.516341 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:11 crc kubenswrapper[4995]: E0126 23:09:11.516523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.517856 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.537026 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.554872 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 
shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T2
3:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557699 4995 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.557730 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.568317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b164
39ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.582777 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.599460 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.612778 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.637533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.652704 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661300 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661311 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.661339 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.675613 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.691090 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.709966 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.725042 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.740766 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.754653 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763204 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.763281 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.767056 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.779749 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.806449 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.820975 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.832981 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.864996 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865039 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 
23:09:11.865067 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.865079 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.869157 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.872148 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.872542 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.904131 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.924367 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.943647 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.956438 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967068 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967077 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.967126 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:11Z","lastTransitionTime":"2026-01-26T23:09:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:11 crc kubenswrapper[4995]: I0126 23:09:11.972907 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:11 crc 
kubenswrapper[4995]: I0126 23:09:11.987802 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:11Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.010666 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.024517 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.036349 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd5
8dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.051261 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.062401 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069336 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069385 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.069430 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.074199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.084398 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.096348 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.119879 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.133896 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.143356 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.155774 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171669 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171703 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.171715 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.273953 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.274044 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375976 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.375994 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.376023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.376041 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.478561 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.494405 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:01:46.93306268 +0000 UTC Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517095 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517089 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.517259 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517403 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517558 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.517675 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582496 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.582659 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.684961 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.684997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.685035 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788347 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788454 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.788505 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.878465 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.879487 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/1.log" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881691 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" exitCode=1 Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.881837 4995 scope.go:117] "RemoveContainer" containerID="ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.882375 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:12 crc kubenswrapper[4995]: E0126 23:09:12.882577 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891878 4995 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891946 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.891969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.892000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.892022 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.905981 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.929787 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.951921 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.978955 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ea6aba5967d3a5383553577b8b82b6681e30c56bd4a4c704e4b44553c0bc5b5d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"message\\\":\\\"ler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:08:49Z is after 2025-08-24T17:21:41Z]\\\\nI0126 23:08:49.577693 6400 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:08:49.577725\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, 
nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB 
cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf
8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995363 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995430 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:12Z","lastTransitionTime":"2026-01-26T23:09:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:12 crc kubenswrapper[4995]: I0126 23:09:12.995487 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:12Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.011625 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.024359 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.036295 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.045932 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.056830 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.068396 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.079017 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.090351 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097918 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.097946 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.115367 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.130215 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.145782 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.157579 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.166936 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200691 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200743 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.200770 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.303217 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.406925 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.495524 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:46:50.967939099 +0000 UTC Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.509816 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.516619 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.516774 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611754 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611840 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.611859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715221 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715301 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715362 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715395 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.715416 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765722 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765852 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.765878 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.789180 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.793426 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.808696 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813124 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813154 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813171 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.813185 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.828977 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832464 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832473 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.832496 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.849229 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853497 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853521 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853530 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853543 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.853553 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.870714 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.870872 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872495 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872506 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872521 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.872532 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.886551 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.890624 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:13 crc kubenswrapper[4995]: E0126 23:09:13.890807 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.903252 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.919828 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.932958 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.947494 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.960790 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.971776 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:13 crc 
kubenswrapper[4995]: I0126 23:09:13.974995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:13 crc kubenswrapper[4995]: I0126 23:09:13.975086 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:13Z","lastTransitionTime":"2026-01-26T23:09:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.000395 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:13Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.017908 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.032493 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.045887 4995 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.061500 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.075143 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077264 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.077307 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.093745 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.108121 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.120648 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.133615 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.156756 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.171088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:14Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180245 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.180259 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283820 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.283860 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387072 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387180 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.387192 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490311 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490376 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.490445 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.496532 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:01:38.047446398 +0000 UTC Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.516971 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.517017 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517165 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.517213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517379 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:14 crc kubenswrapper[4995]: E0126 23:09:14.517638 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593409 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593420 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.593450 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696718 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.696807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.799432 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902352 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:14 crc kubenswrapper[4995]: I0126 23:09:14.902409 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:14Z","lastTransitionTime":"2026-01-26T23:09:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005888 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005956 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005982 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.005992 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109056 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109145 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109190 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.109205 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211600 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211706 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.211722 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314724 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.314871 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418205 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.418228 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.497678 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 11:56:27.875894447 +0000 UTC Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.517272 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:15 crc kubenswrapper[4995]: E0126 23:09:15.517440 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523630 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.523761 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626580 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626655 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.626690 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.729605 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833288 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.833333 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.935988 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936059 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936082 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:15 crc kubenswrapper[4995]: I0126 23:09:15.936174 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:15Z","lastTransitionTime":"2026-01-26T23:09:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039563 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.039610 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142130 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142213 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.142265 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245181 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.245222 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.348906 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451267 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451315 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.451335 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.498168 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 07:18:40.145091895 +0000 UTC Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517017 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517068 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517300 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.517347 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517527 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:16 crc kubenswrapper[4995]: E0126 23:09:16.517665 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.538291 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556663 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556701 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.556714 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.560236 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.572929 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.584948 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.597199 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.610031 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.620619 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.637491 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.651127 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.658886 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.663533 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.678168 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.688685 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.699349 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.714903 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.727145 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.749094 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762392 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762405 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762424 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762438 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.762969 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.774058 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:16Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864394 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.864409 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967720 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:16 crc kubenswrapper[4995]: I0126 23:09:16.967807 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:16Z","lastTransitionTime":"2026-01-26T23:09:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070292 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070344 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.070417 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.174584 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277858 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277895 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.277910 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381034 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381118 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.381167 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484221 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.484243 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.499730 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 11:45:25.023815251 +0000 UTC Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.517299 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:17 crc kubenswrapper[4995]: E0126 23:09:17.517533 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592187 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.592226 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.694998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695075 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.695153 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798361 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798423 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798472 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.798492 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901381 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901460 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:17 crc kubenswrapper[4995]: I0126 23:09:17.901499 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:17Z","lastTransitionTime":"2026-01-26T23:09:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003920 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003972 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.003996 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106814 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106870 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.106925 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210668 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.210677 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.313701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416796 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416850 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.416859 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.500376 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 07:19:02.893840933 +0000 UTC Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516768 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516777 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.516910 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.516941 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.517025 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:18 crc kubenswrapper[4995]: E0126 23:09:18.517173 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518587 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518612 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518620 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518630 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.518638 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621206 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.621286 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724235 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.724252 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826397 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826407 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826422 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.826433 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928732 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:18 crc kubenswrapper[4995]: I0126 23:09:18.928761 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:18Z","lastTransitionTime":"2026-01-26T23:09:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036087 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036372 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036541 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.036661 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138672 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.138752 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241633 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.241666 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344244 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344373 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.344393 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447000 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447060 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.447122 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.501021 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 08:41:53.045482529 +0000 UTC Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.516400 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:19 crc kubenswrapper[4995]: E0126 23:09:19.516586 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549863 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.549922 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.652702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653250 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.653564 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757020 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757574 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757748 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.757923 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860686 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860745 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.860757 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962740 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962762 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:19 crc kubenswrapper[4995]: I0126 23:09:19.962770 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:19Z","lastTransitionTime":"2026-01-26T23:09:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065192 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.065277 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168086 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168156 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.168186 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272709 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.272834 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375662 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375676 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.375684 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478753 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478786 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.478798 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.501333 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:46:08.189474616 +0000 UTC Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516480 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516542 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.516499 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516616 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516711 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:20 crc kubenswrapper[4995]: E0126 23:09:20.516790 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581278 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581853 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.581957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.582053 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684592 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.684719 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787914 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787957 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787968 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.787999 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891512 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.891540 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995277 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995302 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:20 crc kubenswrapper[4995]: I0126 23:09:20.995314 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:20Z","lastTransitionTime":"2026-01-26T23:09:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097719 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097741 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.097750 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.200969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201426 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201589 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.201698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304339 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304384 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304396 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.304424 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406168 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406239 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.406250 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.501924 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 17:43:46.244924412 +0000 UTC Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508622 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.508745 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.516785 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:21 crc kubenswrapper[4995]: E0126 23:09:21.516941 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610952 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610969 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.610987 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.611002 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.712811 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713151 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.713327 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815953 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.815983 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917727 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917766 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:21 crc kubenswrapper[4995]: I0126 23:09:21.917789 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:21Z","lastTransitionTime":"2026-01-26T23:09:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020767 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020782 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.020823 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.122928 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123329 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123422 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.123520 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226576 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226610 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226631 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.226640 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.329113 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431749 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.431784 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.502341 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:34:43.870582123 +0000 UTC Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516350 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516372 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.516392 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516907 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516924 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:22 crc kubenswrapper[4995]: E0126 23:09:22.516950 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.534687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.534973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535129 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.535213 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636931 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.636941 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740213 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740634 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.740709 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.848751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849049 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849169 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849262 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.849346 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952160 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:22 crc kubenswrapper[4995]: I0126 23:09:22.952199 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:22Z","lastTransitionTime":"2026-01-26T23:09:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054595 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054656 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054674 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054697 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.054716 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157377 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157497 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.157531 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.259980 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.260050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.260126 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362628 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.362701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.426553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.426994 4995 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.427231 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs podName:4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4 nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.427210879 +0000 UTC m=+99.591918344 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs") pod "network-metrics-daemon-vlmfg" (UID: "4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464900 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.464936 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.503443 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 19:58:27.61206185 +0000 UTC Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.516766 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:23 crc kubenswrapper[4995]: E0126 23:09:23.516924 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567432 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567455 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.567498 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669603 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.669613 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772339 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772644 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772707 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772780 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.772860 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875019 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875383 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.875839 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.978955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979045 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.979094 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986975 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.986997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.987022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:23 crc kubenswrapper[4995]: I0126 23:09:23.987043 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:23Z","lastTransitionTime":"2026-01-26T23:09:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.002307 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006196 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.006251 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.017970 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022588 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022599 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.022629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.032897 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035921 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035935 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.035943 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.046013 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049334 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049395 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.049437 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.061603 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:24Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.061766 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081436 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.081446 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183391 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.183428 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285660 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.285698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387556 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387570 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.387583 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490337 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490390 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490408 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490431 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.490449 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.503754 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:35:06.619154878 +0000 UTC Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.517265 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.517275 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.517733 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.518082 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.518232 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:24 crc kubenswrapper[4995]: E0126 23:09:24.518218 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592582 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592670 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592698 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.592720 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695885 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695927 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.695959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798142 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798193 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798205 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798223 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.798234 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900346 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900403 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:24 crc kubenswrapper[4995]: I0126 23:09:24.900434 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:24Z","lastTransitionTime":"2026-01-26T23:09:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002942 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002965 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.002975 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104605 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.104614 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207601 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207657 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.207685 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309767 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.309915 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.411867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412495 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.412513 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.504740 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:29:26.035287817 +0000 UTC Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514626 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514653 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.514664 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.516822 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:25 crc kubenswrapper[4995]: E0126 23:09:25.516909 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617382 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.617458 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720602 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.720619 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823475 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823502 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.823512 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925371 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925388 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.925400 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:25Z","lastTransitionTime":"2026-01-26T23:09:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928250 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928304 4995 generic.go:334] "Generic (PLEG): container finished" podID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" exitCode=1 Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928333 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerDied","Data":"cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81"} Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.928753 4995 scope.go:117] "RemoveContainer" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.943002 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.955779 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc 
kubenswrapper[4995]: I0126 23:09:25.975871 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:25 crc kubenswrapper[4995]: I0126 23:09:25.988701 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:25Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.002677 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.014712 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028494 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028523 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.028552 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.029483 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.042521 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.054308 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/r
un/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.068132 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.078587 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.090049 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.107340 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.121377 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131508 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131519 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131537 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.131550 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.132706 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.147333 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.162392 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.172650 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233122 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233143 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.233151 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336091 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.336156 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438488 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438516 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.438528 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.505478 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:11:14.884630087 +0000 UTC Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516842 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516875 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.516921 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517045 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517176 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:26 crc kubenswrapper[4995]: E0126 23:09:26.517295 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.528853 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.
d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.541194 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.543725 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.555016 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T
23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri
-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.566088 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.577737 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.588523 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.599944 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.627406 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05
cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.640745 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643129 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643183 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643200 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.643212 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.652010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.662322 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.673008 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.683714 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.694273 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.706365 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.717182 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.730480 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448
afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745711 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745758 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745786 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.745797 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.755893 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848446 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848505 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.848514 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.933028 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.933310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.946895 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\"
,\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate 
default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d87
95611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950006 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950037 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950051 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.950086 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:26Z","lastTransitionTime":"2026-01-26T23:09:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.956010 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.967455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4
bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:26 crc kubenswrapper[4995]: I0126 23:09:26.990687 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:26Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.005964 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:
18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.018541 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.029698 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.038869 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.050479 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052048 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.052072 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.063274 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.074726 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.083803 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368
e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.094004 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51
bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.113455 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.123940 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.136416 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.147573 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153707 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.153741 4995 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.161346 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:27Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256534 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.256566 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358413 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358448 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358470 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.358482 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460739 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.460785 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.505631 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 18:44:45.649551966 +0000 UTC Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.516861 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:27 crc kubenswrapper[4995]: E0126 23:09:27.516982 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563370 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563412 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563425 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.563436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666487 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666520 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666530 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666546 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.666555 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768926 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768938 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.768947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872236 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.872244 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974114 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974126 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:27 crc kubenswrapper[4995]: I0126 23:09:27.974151 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:27Z","lastTransitionTime":"2026-01-26T23:09:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076641 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076681 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076709 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.076720 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179467 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179500 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.179512 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282225 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.282262 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.384996 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.385013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.385022 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488654 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.488698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.505975 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:15:18.815880118 +0000 UTC Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516706 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516743 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.516718 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.516893 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.516953 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:28 crc kubenswrapper[4995]: E0126 23:09:28.517023 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.527752 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591120 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591198 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.591251 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695443 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695456 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695477 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.695489 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798215 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798273 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.798313 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:28 crc kubenswrapper[4995]: I0126 23:09:28.900576 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:28Z","lastTransitionTime":"2026-01-26T23:09:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007768 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007810 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007819 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007836 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.007845 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110518 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110552 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110577 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.110587 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213503 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213515 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213533 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.213546 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315737 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315794 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.315805 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420574 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420591 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420611 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.420626 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.506151 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 08:25:48.610619021 +0000 UTC Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.516505 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:29 crc kubenswrapper[4995]: E0126 23:09:29.516934 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.517263 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:29 crc kubenswrapper[4995]: E0126 23:09:29.517457 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522650 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.522676 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625320 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625348 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.625376 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729485 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729551 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729569 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.729581 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832333 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.832344 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934212 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934224 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934242 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:29 crc kubenswrapper[4995]: I0126 23:09:29.934254 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:29Z","lastTransitionTime":"2026-01-26T23:09:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.035960 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036036 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.036048 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139231 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139257 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.139309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242851 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242947 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.242966 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.243022 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.243045 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345909 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.345959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448119 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.448159 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.506719 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:35:36.666724779 +0000 UTC Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519233 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519653 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.519794 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.519995 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.520238 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:30 crc kubenswrapper[4995]: E0126 23:09:30.520400 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551259 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551269 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.551300 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654237 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654271 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.654285 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757687 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757702 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.757711 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861038 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861092 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861149 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861167 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.861186 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963823 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963901 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963924 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963958 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:30 crc kubenswrapper[4995]: I0126 23:09:30.963981 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:30Z","lastTransitionTime":"2026-01-26T23:09:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066696 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066731 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.066768 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169561 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169618 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169667 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.169684 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272865 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.272906 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375435 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375501 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375513 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375529 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.375542 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478492 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478645 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.478712 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.507805 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 18:27:17.673384107 +0000 UTC Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.517240 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:31 crc kubenswrapper[4995]: E0126 23:09:31.517403 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581092 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581137 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581150 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.581175 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.682970 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683004 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683027 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.683036 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785573 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785775 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.785868 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889115 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889125 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.889148 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:31 crc kubenswrapper[4995]: I0126 23:09:31.991678 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:31Z","lastTransitionTime":"2026-01-26T23:09:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094266 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094297 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094312 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.094324 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196417 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196510 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196548 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196567 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.196581 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299287 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299305 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299330 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.299349 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403882 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.403919 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507218 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507279 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.507332 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.508275 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 17:31:03.902238588 +0000 UTC Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517308 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517345 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517445 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.517466 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517618 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:32 crc kubenswrapper[4995]: E0126 23:09:32.517691 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609255 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609268 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.609279 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711822 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.711907 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.712058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.712166 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815296 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815331 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815341 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815354 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.815363 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919338 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919368 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:32 crc kubenswrapper[4995]: I0126 23:09:32.919380 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:32Z","lastTransitionTime":"2026-01-26T23:09:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022482 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022673 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.022862 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.023035 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.125985 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126040 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126052 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126070 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.126082 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228586 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228594 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.228617 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331199 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331230 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.331246 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433807 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433875 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433898 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.433946 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.509191 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:35:43.173393138 +0000 UTC Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.516663 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:33 crc kubenswrapper[4995]: E0126 23:09:33.517030 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536760 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536936 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.536950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640190 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640249 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640265 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640289 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.640309 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743744 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743867 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.743889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846856 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846906 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846917 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.846939 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949458 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949488 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949510 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:33 crc kubenswrapper[4995]: I0126 23:09:33.949519 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:33Z","lastTransitionTime":"2026-01-26T23:09:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.052457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.052896 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.053401 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.053842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.054270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157542 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157614 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157638 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.157655 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260367 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260421 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260438 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.260452 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.369998 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370094 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370155 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.370178 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.434984 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435042 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435059 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.435130 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.455896 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461313 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461531 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461821 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.461950 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.482426 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487545 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487613 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487637 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.487688 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.510661 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:37:55.016346118 +0000 UTC Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.511084 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",
\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515829 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515877 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515892 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.515932 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516473 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.516521 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517003 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517189 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.517363 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.531985 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536579 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536632 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536717 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.536731 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.551877 4995 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d1cbdfe9-1842-4004-b68d-332d972c0049\\\",\\\"systemUUID\\\":\\\"95aab811-f2d5-4faf-a048-4477d37cf623\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:34Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:34 crc kubenswrapper[4995]: E0126 23:09:34.552120 4995 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553536 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553584 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553597 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553615 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.553629 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656723 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656750 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656759 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656771 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.656789 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.759353 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.759792 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.760751 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863596 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863661 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863678 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.863691 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966251 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966286 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966295 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966309 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:34 crc kubenswrapper[4995]: I0126 23:09:34.966318 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:34Z","lastTransitionTime":"2026-01-26T23:09:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069489 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069562 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069581 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.069593 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173084 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173166 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173177 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.173207 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275808 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275873 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275887 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.275917 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377647 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377690 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377700 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377725 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.377737 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480053 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.480267 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.511559 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:21:35.695350027 +0000 UTC Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.516914 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:35 crc kubenswrapper[4995]: E0126 23:09:35.517045 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583883 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583895 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.583923 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687019 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687061 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.687093 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789779 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789871 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789904 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789933 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.789959 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892795 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892842 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892854 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892872 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.892885 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995247 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995299 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995314 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:35 crc kubenswrapper[4995]: I0126 23:09:35.995322 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:35Z","lastTransitionTime":"2026-01-26T23:09:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097263 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.097276 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.200457 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205416 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205544 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205673 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.205698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308874 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308908 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308919 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308934 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.308945 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411863 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411900 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411912 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411930 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.411940 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.512453 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:34:52.220789802 +0000 UTC Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515159 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515170 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515191 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.515201 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516378 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516484 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.516665 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.516721 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.517062 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:36 crc kubenswrapper[4995]: E0126 23:09:36.517256 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.541351 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be4486f1-6ac2-4655-aff8-634049c9aa6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T23:09:12Z\\\",\\\"message\\\":\\\"CP\\\\\\\", inport:8442, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.4.167\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:8444, 
clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0126 23:09:12.386001 6705 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Switch Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:c94130be-172c-477c-88c4-40cc7eba30fe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 23:09:12.385981 6705 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf after 0 failed attempt(s)\\\\nI0126 23:09:12.386022 6705 default_network_controller.go:776] Recording success event on pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI0126 23:09:12.385075 6705 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:09:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l9xmp_openshift-ovn-kubernetes(be4486f1-6ac2-4655-aff8-634049c9aa6c)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0e0b6ad120651eef34
662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ngr8z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l9xmp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.553670 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a724234-c0d3-4f4d-8995-9c26af415bae\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a55e11716925cce81c41c9f11fb000386beeb8b70e04254b605df03a4203004\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4968022ac9ab52cfea33d3fccf8e070660139e224bba28dc4ade8a43c05bf46\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.566819 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b9b734b7-3aee-408e-a92b-e4ede146aa53\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9199f55438d6286f90fb562d5edea35f3ac3d48a13f517dae77629d629ca767e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0190c2bc73623be599b64246a67ed4fab67a5e627fd47dfe10ffd7a53e41611\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85dd28da1762e79dc0b1b05f4d40dd30d7f9f3dc51226f33cd25d44a5c398d50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://de9079d4534282c7a3ebb9cd58dcc200269ac2555f24ffe7cd033b32aad68142\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.585267 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.601681 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://446dd603677d05fc2d94ab3283b77e499779290bbf6ca065a57ab1eec7e61f71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e81549b02fe5596aae11b3faef6bf39d8ed55f576b587cba0bd80a088241f71e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.616799 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6528e40b754bda96b0e1681301d123979419b2ce4eca574db920f6d3f3ec64f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618024 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618071 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618081 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618114 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.618127 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.627050 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-m8zlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"15f852ca-fb3b-4ad2-836a-d0dbe735dde4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://006b22efd6bd78e5bf9368e370b312f96f21f8cee03b4e1ad912e47fa3552e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-884rn\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-m8zlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.638989 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://524ab99258ef691881311dc90822448afe3aa41ee3c8cd9d9ab1b169bc636d63\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-clj2d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-sj7pr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.653317 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ceee02e-61ac-4e0c-af1d-39aff19627ac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"g.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477943 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 23:08:36.477947 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 23:08:36.477950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 23:08:36.477952 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 23:08:36.477954 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 23:08:36.478199 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0126 23:08:36.493956 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0126 23:08:36.493999 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0126 23:08:36.494037 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494058 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0126 23:08:36.494074 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0126 23:08:36.494081 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0126 23:08:36.494181 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0126 23:08:36.494191 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nF0126 23:08:36.493692 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.668462 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-xltwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d39f52ec-0319-4f38-b9f5-7f472d8006c5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54dd9423269177bb8d34dae09c8b36b16439ddc14e99eeeb3b278a98520c2fae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vwzch\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:40Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-xltwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.683040 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1ef1196f-dfec-4c45-9abc-0cd1df4bc941\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://70293941290df2463db8b24514522a280ddaff677786bbc7333b068603b81966\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae63ba3d96755a005e64661770c408b6b66b4bdb2532dcae14a30b5a16302abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wp4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2rkl7\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.702735 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5e3309c-ef08-42c4-b03f-3ff1a9f9e43d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://629b038d33e862f84506727ee22beb7acc3df1b4429acebfec742d78fc413dc7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c9cc540b4b4d76dc1e8c096a1436b8d0d359e4342abc2c83eead4adb720ee5cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://797d12994bf7537a636ba1d2df1186e1e3c1ed99ef96a5b2e86ae71ce98419ca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://b99a1adae1a000e597557f2590c5e9e05cb207fbff3a84169ee7445ff86bd98b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfb97accc4c21e11c651f422802afe192235b29effd52574bc7ed09bf8f6dfcb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11e4827e2670a359bff20484128002418e8cbb954aa56ada11411492fc4aa762\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ca0d2c45c1df4c1d0f1393a49873717da42b0e4a0070f31dece4a64f7c237e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afe3c8a24130ce591c4e777aed4cc6587e8a071511ab352061563ca4bc7d7b1e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.716965 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"46b46f80-6d25-424f-bb27-f25876bb0ac0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fa51758084cc19b4b0dec2071ae3b7cbd1eae83ddb5a96857d3587b591623a84\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://579c8da451190f1c7047518c22e356d2f8f8d5eaec8a147cf41d0451f29d485e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://73fd817229318d319999b34a2d007f11d77cf4ef0589d723f519fa04bb19afd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T23:08:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:16Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720553 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720663 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.720676 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.733905 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fda2ecf5b972004633d6f4aa78ecb5f82df915e52c303b6a1c062845790d1832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.745564 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.755945 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:51Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5xtg8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:51Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vlmfg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.772721 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:36Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.790321 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pkt82" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b7acc40a-3d17-4c4f-8300-2fa8c89564a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4655234466c5b74ddad4092b0190863c924c4c07a44e6fef30d61c45e099d950\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:08:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d9f145e01bf8293e4daeb24ac3c489fd0bc2c7f3ccdf2b0dca6c7a1cb156b03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0f2c35fc47d0d6994386f26a081e5a8c73741d6b292821f4ef397cbba39f0c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://22ea11ebfc502bce5964c5cf4e7efef841071c6e94280bc2ffb8e4d2a716019f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1886a
3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1886a3b0037bc64140c438e116b60d83862d49341d5b77a49b0c2e431b188a89\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49a9dcdcb1d5561335c3bbda97abf9951ce54afecb02cdbef3dac1fb2d702d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:43Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f4f1d8f4f44198847772c9b5d04dc4062fef136f1cf6764572180f92aacebe26\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T23:08:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7rmfp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pkt82\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.808054 4995 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hln88" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ba70657-ea12-4a85-9ec3-c1423b5b6912\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:08:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T23:09:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\
":\\\"2026-01-26T23:09:25Z\\\",\\\"message\\\":\\\"2026-01-26T23:08:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1\\\\n2026-01-26T23:08:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_1748e47f-a8f8-47fd-a4c1-61d9634d10c1 to /host/opt/cni/bin/\\\\n2026-01-26T23:08:40Z [verbose] multus-daemon started\\\\n2026-01-26T23:08:40Z [verbose] Readiness Indicator file check\\\\n2026-01-26T23:09:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T23:08:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T23:09:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\
\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25pf7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T23:08:38Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hln88\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T23:09:36Z is after 2025-08-24T17:21:41Z" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824587 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824693 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824797 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.824873 4995 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927248 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927307 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927324 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:36 crc kubenswrapper[4995]: I0126 23:09:36.927335 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:36Z","lastTransitionTime":"2026-01-26T23:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.029962 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030029 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.030057 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132747 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132755 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.132778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235128 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235173 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235189 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235210 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.235225 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338625 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338641 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.338687 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440498 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440559 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440578 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440601 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.440618 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.513477 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 12:57:23.941395266 +0000 UTC Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.516801 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:37 crc kubenswrapper[4995]: E0126 23:09:37.516936 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542671 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542769 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542781 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542800 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.542812 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646026 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646093 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646131 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.646142 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748798 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748839 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748847 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748864 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.748874 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851763 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851860 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851881 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851916 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.851933 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954622 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954675 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:37 crc kubenswrapper[4995]: I0126 23:09:37.954698 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:37Z","lastTransitionTime":"2026-01-26T23:09:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057175 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057234 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057261 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.057270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160502 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160565 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160583 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160606 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.160624 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263665 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263732 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263746 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263765 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.263778 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366246 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366283 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366293 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366307 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.366317 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469890 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.469988 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.470015 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.470034 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.513783 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:21:49.200370145 +0000 UTC Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517119 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517130 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517272 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.517309 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517426 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:38 crc kubenswrapper[4995]: E0126 23:09:38.517483 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573439 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573452 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573469 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.573481 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675791 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675862 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675879 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675903 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.675924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779429 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779486 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779507 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779535 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.779555 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883547 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883627 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883649 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883679 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.883701 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986085 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986127 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986136 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:38 crc kubenswrapper[4995]: I0126 23:09:38.986156 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:38Z","lastTransitionTime":"2026-01-26T23:09:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089503 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089515 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089532 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.089543 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192405 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192462 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.192491 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295164 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295229 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295252 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.295270 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398298 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398359 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398376 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398410 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.398428 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500685 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500733 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500748 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500766 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.500780 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.514335 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 12:37:31.915524318 +0000 UTC Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.516679 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:39 crc kubenswrapper[4995]: E0126 23:09:39.516866 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602398 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602437 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602448 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602465 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.602476 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704590 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704669 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704688 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704715 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.704733 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807770 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807827 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807844 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807868 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.807889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.910915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911023 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:39 crc kubenswrapper[4995]: I0126 23:09:39.911127 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:39Z","lastTransitionTime":"2026-01-26T23:09:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.013922 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.013991 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014013 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.014069 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117065 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117083 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117132 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.117149 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220393 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220416 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.220436 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.322990 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323044 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323055 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.323085 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.408316 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.408608 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:10:44.408579841 +0000 UTC m=+148.573287336 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.426955 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427050 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427073 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.427092 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510321 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510410 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.510468 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510546 4995 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510605 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510616 4995 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510630 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510678 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510708 4995 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510623 4995 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510731 4995 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510631 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510612217 +0000 UTC m=+148.675319682 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510788 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510769761 +0000 UTC m=+148.675477306 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510794801 +0000 UTC m=+148.675502266 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.510840 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:10:44.510831502 +0000 UTC m=+148.675539067 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.514685 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:10:32.493000462 +0000 UTC Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517075 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517186 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.517242 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517295 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:40 crc kubenswrapper[4995]: E0126 23:09:40.517381 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529751 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529837 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.529849 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633140 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633211 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.633222 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736355 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736433 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736445 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736462 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.736472 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838635 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838689 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838699 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838712 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.838721 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940843 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940880 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940889 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:40 crc kubenswrapper[4995]: I0126 23:09:40.940911 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:40Z","lastTransitionTime":"2026-01-26T23:09:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042835 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042891 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042902 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042915 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.042924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145365 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145435 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145451 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.145462 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248076 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248180 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248222 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.248239 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351318 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351389 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351404 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351430 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.351448 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453194 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453201 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453214 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.453224 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.515710 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:12:08.949807294 +0000 UTC Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.516533 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:41 crc kubenswrapper[4995]: E0126 23:09:41.516676 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555153 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555195 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555209 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.555236 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658351 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658434 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658449 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.658498 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760788 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760845 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760855 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760878 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.760889 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.863566 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864005 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864016 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864032 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.864043 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966066 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966124 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966133 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966148 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:41 crc kubenswrapper[4995]: I0126 23:09:41.966158 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:41Z","lastTransitionTime":"2026-01-26T23:09:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068069 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068121 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068132 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068147 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.068158 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171062 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171145 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171162 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171189 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.171206 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.273884 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.273983 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274007 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274037 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.274064 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375866 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375910 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375921 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375937 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.375947 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.478950 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479021 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479054 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.479067 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516546 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:31:05.972224646 +0000 UTC Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516672 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516711 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.516739 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.524133 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.524830 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:42 crc kubenswrapper[4995]: E0126 23:09:42.525027 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.525927 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581138 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581172 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581182 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581197 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.581206 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683944 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683983 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.683994 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.684012 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.684024 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.785964 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786344 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786357 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.786389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889243 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889282 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889291 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889308 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.889317 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.979321 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.980970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerStarted","Data":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.981883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991335 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991360 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991369 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991381 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:42 crc kubenswrapper[4995]: I0126 23:09:42.991389 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:42Z","lastTransitionTime":"2026-01-26T23:09:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.022446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podStartSLOduration=66.022428581 podStartE2EDuration="1m6.022428581s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:42.999986576 +0000 UTC m=+87.164694041" watchObservedRunningTime="2026-01-26 23:09:43.022428581 +0000 UTC m=+87.187136046" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.022570 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podStartSLOduration=66.022566074 podStartE2EDuration="1m6.022566074s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.022278528 +0000 UTC m=+87.186986013" watchObservedRunningTime="2026-01-26 23:09:43.022566074 +0000 UTC m=+87.187273539" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.034941 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.034922674 podStartE2EDuration="15.034922674s" podCreationTimestamp="2026-01-26 23:09:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.034042013 +0000 UTC m=+87.198749498" watchObservedRunningTime="2026-01-26 23:09:43.034922674 +0000 UTC m=+87.199630139" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.064973 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
podStartSLOduration=40.064952523 podStartE2EDuration="40.064952523s" podCreationTimestamp="2026-01-26 23:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.049847027 +0000 UTC m=+87.214554512" watchObservedRunningTime="2026-01-26 23:09:43.064952523 +0000 UTC m=+87.229659988" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093684 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093721 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093729 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093742 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.093752 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.104162 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-m8zlz" podStartSLOduration=66.104143054 podStartE2EDuration="1m6.104143054s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.103180821 +0000 UTC m=+87.267888296" watchObservedRunningTime="2026-01-26 23:09:43.104143054 +0000 UTC m=+87.268850519" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.140266 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=67.140242701 podStartE2EDuration="1m7.140242701s" podCreationTimestamp="2026-01-26 23:08:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.126021376 +0000 UTC m=+87.290728851" watchObservedRunningTime="2026-01-26 23:09:43.140242701 +0000 UTC m=+87.304950166" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.153187 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2rkl7" podStartSLOduration=66.153169054 podStartE2EDuration="1m6.153169054s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.153019611 +0000 UTC m=+87.317727066" watchObservedRunningTime="2026-01-26 23:09:43.153169054 +0000 UTC m=+87.317876519" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.153715 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xltwc" podStartSLOduration=66.153708918 
podStartE2EDuration="1m6.153708918s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.140725002 +0000 UTC m=+87.305432467" watchObservedRunningTime="2026-01-26 23:09:43.153708918 +0000 UTC m=+87.318416383" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.192148 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=65.19213061 podStartE2EDuration="1m5.19213061s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.174703377 +0000 UTC m=+87.339410862" watchObservedRunningTime="2026-01-26 23:09:43.19213061 +0000 UTC m=+87.356838075" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.192377 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=63.192372046 podStartE2EDuration="1m3.192372046s" podCreationTimestamp="2026-01-26 23:08:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.191147976 +0000 UTC m=+87.355855441" watchObservedRunningTime="2026-01-26 23:09:43.192372046 +0000 UTC m=+87.357079511" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195608 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195652 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195666 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195683 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.195696 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298374 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298648 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298734 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298838 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.298924 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.301485 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.301594 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:43 crc kubenswrapper[4995]: E0126 23:09:43.301686 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.311182 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pkt82" podStartSLOduration=66.311165809 podStartE2EDuration="1m6.311165809s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.310581345 +0000 UTC m=+87.475288810" watchObservedRunningTime="2026-01-26 23:09:43.311165809 +0000 UTC m=+87.475873284" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.400997 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401047 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401058 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: 
I0126 23:09:43.401074 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.401387 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.502973 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503008 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503018 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503033 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.503042 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.518520 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 05:46:02.850305499 +0000 UTC Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605184 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605217 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605226 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605240 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.605250 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707441 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707481 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707493 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707511 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.707525 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809478 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809528 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809538 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.809570 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911756 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911805 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911817 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911832 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:43 crc kubenswrapper[4995]: I0126 23:09:43.911841 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:43Z","lastTransitionTime":"2026-01-26T23:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013705 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013764 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013774 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013809 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.013820 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117247 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117290 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117306 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117327 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.117339 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.219995 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220035 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220043 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220057 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.220068 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322554 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322604 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322621 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322639 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.322651 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425522 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425536 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425557 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.425572 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.516974 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.517048 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.516976 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517132 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517211 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 23:09:44 crc kubenswrapper[4995]: E0126 23:09:44.517390 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.518969 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:32:54.340777943 +0000 UTC Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528418 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528453 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528461 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528499 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.528509 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630854 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630911 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630925 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630945 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.630960 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660739 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660778 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660789 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660813 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.660827 4995 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T23:09:44Z","lastTransitionTime":"2026-01-26T23:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.706540 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hln88" podStartSLOduration=67.706521806 podStartE2EDuration="1m7.706521806s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:43.323700423 +0000 UTC m=+87.488407898" watchObservedRunningTime="2026-01-26 23:09:44.706521806 +0000 UTC m=+88.871229281" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.706751 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc"] Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.707195 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.711204 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.712362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.712374 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.713755 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.751709 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.751963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752157 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752310 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.752466 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853795 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853826 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853883 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.853921 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.854195 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/d797ab32-8a7c-4f54-be9b-26cdab54574d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.855831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d797ab32-8a7c-4f54-be9b-26cdab54574d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 23:09:44.863065 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d797ab32-8a7c-4f54-be9b-26cdab54574d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:44 crc kubenswrapper[4995]: I0126 
23:09:44.889867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d797ab32-8a7c-4f54-be9b-26cdab54574d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2r7rc\" (UID: \"d797ab32-8a7c-4f54-be9b-26cdab54574d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.022722 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.516585 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:45 crc kubenswrapper[4995]: E0126 23:09:45.516891 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vlmfg" podUID="4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.519840 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:35:28.932084443 +0000 UTC Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.519924 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.526927 4995 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.759198 4995 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.759406 4995 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.803950 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.804489 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.807299 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.807979 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.808365 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.813580 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.813877 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.814395 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.815929 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816227 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816441 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.816701 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818369 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818594 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.818895 4995 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819074 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819229 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819298 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.819601 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.826088 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.827260 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.829972 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.830634 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.831242 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.832020 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.832615 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833834 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833930 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834016 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833969 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.833967 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834241 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.834552 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.837490 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.838291 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.839871 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840270 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840546 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.840838 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.841254 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.842674 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.845095 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.852817 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856151 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856794 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.856885 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.858061 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.858222 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.863511 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.864056 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.865202 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.865824 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866145 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866281 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866342 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mx2\" (UniqueName: \"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866372 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866414 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866469 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866490 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866531 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866553 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866652 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866714 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866747 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866786 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866823 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866858 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866893 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866951 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.866993 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: 
\"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867059 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867086 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867409 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867491 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867596 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867684 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867708 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867738 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.867902 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868031 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868036 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868212 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868242 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868284 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868333 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868360 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868383 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868408 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868385 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868433 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc5qk\" (UniqueName: 
\"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868516 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868423 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868551 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868560 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868589 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868657 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.868968 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869178 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869197 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.869336 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.870145 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.870644 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.871412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.871967 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.872400 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.873008 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.877715 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879002 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879760 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.879846 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.880220 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882590 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882881 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.882919 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883415 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883704 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.883746 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.884651 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.886981 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.887413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.899462 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.900718 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.902606 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.902976 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903332 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903381 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903578 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903620 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903661 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.903742 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.904522 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.905947 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.909811 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.916315 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.916526 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.917706 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.917944 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918472 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918666 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.918858 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919181 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919344 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: 
I0126 23:09:45.919525 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919682 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919796 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.919901 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920070 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920212 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920316 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920476 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.920672 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.923373 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.923580 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926206 4995 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926442 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.926688 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935732 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935834 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935904 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.935996 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936061 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936213 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936592 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936654 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936687 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936814 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.936597 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.938273 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.941637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.942366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.944735 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.944964 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.945562 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.945644 4995 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.948235 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949536 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949719 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.949785 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950056 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950470 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.950825 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.951417 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tw45t"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952084 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952274 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.952769 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.954709 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.955724 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.956224 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.956530 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.957606 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.958381 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961224 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961536 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.961880 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.969792 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.969869 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970617 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc 
kubenswrapper[4995]: I0126 23:09:45.970728 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970746 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970771 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970793 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970816 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970834 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970861 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970908 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.970996 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971019 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971044 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " 
pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971072 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971085 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971128 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971216 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971293 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971314 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971339 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod 
\"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971361 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971387 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971410 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971431 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971457 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971496 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971537 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971627 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971643 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc 
kubenswrapper[4995]: I0126 23:09:45.971663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971682 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971702 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: 
\"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971762 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971782 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971799 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971844 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971874 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971900 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971919 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: 
\"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971964 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.971994 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972062 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972139 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972161 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972184 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972206 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 
23:09:45.972228 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972270 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972290 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod 
\"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972346 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kc5qk\" (UniqueName: \"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972370 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972404 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972448 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972739 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972763 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972780 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972805 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972828 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972894 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972925 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972959 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2mx2\" (UniqueName: 
\"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972990 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973012 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: 
I0126 23:09:45.973078 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-config\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973270 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973767 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-policies\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.973823 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-audit-dir\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 
23:09:45.973927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974176 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-service-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974572 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-config\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972414 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-images\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.974623 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-encryption-config\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.972543 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d547650-1fdd-4334-9376-5f5b165d5069-config\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.975415 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-auth-proxy-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.976555 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977044 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-trusted-ca-bundle\") pod 
\"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-serving-cert\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977774 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977869 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.977897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.978312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.979217 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d8feb049-3911-43fa-bd25-6ecee076d1ed-machine-approver-tls\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.979531 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.980233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.981786 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.981993 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod 
\"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982395 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982869 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.982522 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8feb049-3911-43fa-bd25-6ecee076d1ed-config\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.983475 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.984778 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.985705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/492ea284-e9af-45ce-ac55-c5d8168be715-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.985799 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d547650-1fdd-4334-9376-5f5b165d5069-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.986817 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.986822 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.987758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.988163 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.988526 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-etcd-client\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.989633 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.991005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.991504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.992481 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.993183 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"] Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.993816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/b345a51c-ec48-4066-a49b-713e73429c2d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: 
\"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" Jan 26 23:09:45 crc kubenswrapper[4995]: I0126 23:09:45.997317 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.001925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/492ea284-e9af-45ce-ac55-c5d8168be715-serving-cert\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.006609 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.008283 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009039 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" event={"ID":"d797ab32-8a7c-4f54-be9b-26cdab54574d","Type":"ContainerStarted","Data":"4269eea915fafc6279ea18bdde0ab6bd1012e75f2790e7b038990f724838def5"} Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009079 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" event={"ID":"d797ab32-8a7c-4f54-be9b-26cdab54574d","Type":"ContainerStarted","Data":"fb055c7ffd385417b2cf1e93558d6497aa49c8b59d684d277165e43964cc04a5"} Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009245 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009541 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009667 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009704 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009718 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009730 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009744 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.009758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tsdjk"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010322 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010351 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010429 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.010681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.011793 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.013199 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.014640 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.016073 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.017455 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.018520 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.019483 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.020486 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.021686 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:46 crc 
kubenswrapper[4995]: I0126 23:09:46.022502 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.022600 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.023233 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.023855 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.025282 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.026225 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.026562 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.027687 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.028838 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.030219 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.031406 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.032768 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.034135 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.036819 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.036852 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.037221 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.038364 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.039408 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.040381 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.041519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.042442 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 
23:09:46.046637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.066329 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079383 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079401 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079433 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079454 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079478 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079747 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079770 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: 
\"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.079826 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-node-pullsecrets\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080583 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-trusted-ca\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080688 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080743 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: 
\"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080788 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080806 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080829 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080859 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080906 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.080985 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081000 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081033 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081048 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081442 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b695371-523f-41fd-a8de-6bbc9ce319e0-config\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.081748 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/321948cb-6f71-4375-b575-ee960cd49bc2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082358 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082400 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082436 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: 
\"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082480 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082495 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082520 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082534 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082548 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082564 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082580 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082611 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: 
I0126 23:09:46.082625 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082639 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082653 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082667 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082710 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082723 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082752 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" 
(UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082798 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082814 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082829 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/053917dd-5476-46d8-b9d4-2a1433d86697-mcc-auth-proxy-config\") pod 
\"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082832 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082901 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.082368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-image-import-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.083538 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.088990 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-audit-dir\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.089817 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-trusted-ca-bundle\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090435 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b695371-523f-41fd-a8de-6bbc9ce319e0-serving-cert\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090810 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-serving-ca\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.090915 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-encryption-config\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091476 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-serving-cert\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.091897 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.092426 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.092960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-client\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.093685 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.093866 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.095452 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.096185 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-serving-cert\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.098684 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-etcd-client\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.100647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/321948cb-6f71-4375-b575-ee960cd49bc2-serving-cert\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.101837 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-config\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.106772 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.113567 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.126619 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.136831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6ff36f00-70ac-4a9c-96f6-ade70040b187-etcd-service-ca\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.154093 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.166967 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.187253 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.206667 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.233208 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.245888 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.266364 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.286891 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.307701 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 
23:09:46.320484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96211e14-9e17-4511-8523-609ff907f5c5-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.327220 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.346623 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.355872 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96211e14-9e17-4511-8523-609ff907f5c5-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.367115 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.387371 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.400751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e85666ee-5696-465c-9682-802e968660ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: 
\"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.407185 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.411781 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e85666ee-5696-465c-9682-802e968660ec-config\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.427980 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.448486 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.467120 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.487610 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.508809 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520025 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520608 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.520636 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.527809 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.535396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/053917dd-5476-46d8-b9d4-2a1433d86697-proxy-tls\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.547067 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.587233 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.591711 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dedff685-1753-453d-a4ec-4e48b74cfdc4-images\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.607181 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.627822 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.632690 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dedff685-1753-453d-a4ec-4e48b74cfdc4-proxy-tls\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.646170 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.666782 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.686681 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.708249 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.726711 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.736898 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-default-certificate\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.747554 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.766994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.781653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-stats-auth\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.787591 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.798252 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/24dc4d5e-e13d-4d4d-b1f8-390149f24544-metrics-certs\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.806791 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.827714 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.830654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24dc4d5e-e13d-4d4d-b1f8-390149f24544-service-ca-bundle\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.847196 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.868448 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.888082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.900600 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.908072 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.927413 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.934725 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.947182 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.965304 4995 request.go:700] Waited for 1.008515668s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.967313 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 26 23:09:46 crc kubenswrapper[4995]: I0126 23:09:46.987689 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.006774 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.026777 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.047522 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.067782 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.086534 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.127348 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.147058 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.194485 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqgv\" (UniqueName: \"kubernetes.io/projected/b345a51c-ec48-4066-a49b-713e73429c2d-kube-api-access-4cqgv\") pod \"cluster-samples-operator-665b6dd947-gqbzs\" (UID: \"b345a51c-ec48-4066-a49b-713e73429c2d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.201372 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"route-controller-manager-6576b87f9c-qgp7d\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.225442 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9dc5\" (UniqueName: \"kubernetes.io/projected/49ad869c-a391-4d0b-99fa-74e9d7ef4e87-kube-api-access-s9dc5\") pod \"machine-api-operator-5694c8668f-klb9g\" (UID: \"49ad869c-a391-4d0b-99fa-74e9d7ef4e87\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.243505 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"controller-manager-879f6c89f-zp6fr\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.260116 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2mx2\" (UniqueName: \"kubernetes.io/projected/1d547650-1fdd-4334-9376-5f5b165d5069-kube-api-access-h2mx2\") pod \"openshift-apiserver-operator-796bbdcf4f-znswc\" (UID: \"1d547650-1fdd-4334-9376-5f5b165d5069\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.282530 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djz7g\" (UniqueName: \"kubernetes.io/projected/d8feb049-3911-43fa-bd25-6ecee076d1ed-kube-api-access-djz7g\") pod \"machine-approver-56656f9798-hpqgt\" (UID: \"d8feb049-3911-43fa-bd25-6ecee076d1ed\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.301311 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blxvp\" (UniqueName: \"kubernetes.io/projected/cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26-kube-api-access-blxvp\") pod \"apiserver-7bbb656c7d-vmkbr\" (UID: \"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.320334 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kc5qk\" (UniqueName: \"kubernetes.io/projected/ce7a362e-896b-4492-ac2c-08bd19bba7b4-kube-api-access-kc5qk\") pod \"downloads-7954f5f757-pfw4t\" (UID: \"ce7a362e-896b-4492-ac2c-08bd19bba7b4\") " pod="openshift-console/downloads-7954f5f757-pfw4t"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.340180 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kntsd\" (UniqueName: \"kubernetes.io/projected/492ea284-e9af-45ce-ac55-c5d8168be715-kube-api-access-kntsd\") pod \"authentication-operator-69f744f599-kwqrx\" (UID: \"492ea284-e9af-45ce-ac55-c5d8168be715\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.346297 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.360241 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.366419 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.386688 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.387460 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.407163 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.408487 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.428459 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.438297 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.447393 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.447529 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.458384 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.467935 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.488624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.507645 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.513811 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-pfw4t"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.516320 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.518783 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.522569 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.527003 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.549375 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.568298 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.591871 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.606667 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.637171 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.646920 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.667509 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.689433 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.707674 4995 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.727216 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.738121 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc"]
Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.744890 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d547650_1fdd_4334_9376_5f5b165d5069.slice/crio-4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1 WatchSource:0}: Error finding container 4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1: Status 404 returned error can't find the container with id 4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.747530 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.768639 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.768891 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs"]
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.786625 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.806511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.806894 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr"]
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.813787 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-klb9g"]
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.826989 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.838402 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ad869c_a391_4d0b_99fa_74e9d7ef4e87.slice/crio-a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a WatchSource:0}: Error finding container a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a: Status 404 returned error can't find the container with id a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.846861 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.868881 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.887903 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.900619 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"]
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.902404 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"]
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.906763 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.927227 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.937766 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fb6bf0f_13dc_4a58_853b_98c00142f0bb.slice/crio-8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad WatchSource:0}: Error finding container 8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad: Status 404 returned error can't find the container with id 8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad
Jan 26 23:09:47 crc kubenswrapper[4995]: W0126 23:09:47.938369 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5c78ad_3088_4100_90ac_f863bb21e4a2.slice/crio-d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685 WatchSource:0}: Error finding container d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685: Status 404 returned error can't find the container with id d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.948886 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.967138 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.986869 4995 request.go:700] Waited for 1.963380855s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0
Jan 26 23:09:47 crc kubenswrapper[4995]: I0126 23:09:47.989094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.006712 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-pfw4t"]
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.006989 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.009198 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kwqrx"]
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.013618 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerStarted","Data":"d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.014656 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerStarted","Data":"523a5f1e7efbb572c3d937c5e62a145ddeafb9c6b41005eba4d11a3100ac9f14"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.015903 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"45b75743cba30f2c8a78a317ddd77768854d359302b222c39b9b3a2bd78be747"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.017113 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" event={"ID":"1d547650-1fdd-4334-9376-5f5b165d5069","Type":"ContainerStarted","Data":"5f3ce2ef41a53c46bf833e6429b57d5cf13622c47fc6a7e5f62273de632158a4"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.017141 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" event={"ID":"1d547650-1fdd-4334-9376-5f5b165d5069","Type":"ContainerStarted","Data":"4778f78459737840e9a72718e7db8924346b38a41d3feee27e107579d31c9df1"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.019383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"a74040d9ae5bf9f8fce9a8f0603062970123419329645d0dba86c22ccd41a82a"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028038 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"86eb06a8e536fbfcd920efd785e31b9b8ba36ecad3469b48a509c15e56c657b0"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028075 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"2546af6483ff28b3428fc94911d2fddd4c2eeab77073ad65fa5554a13e61af3b"}
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.028336 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.032014 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerStarted","Data":"8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad"}
Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.036037 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod492ea284_e9af_45ce_ac55_c5d8168be715.slice/crio-e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f WatchSource:0}: Error finding container e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f: Status 404 returned error can't find the container with id e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.046585 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.084137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7bb\" (UniqueName: \"kubernetes.io/projected/4b695371-523f-41fd-a8de-6bbc9ce319e0-kube-api-access-4d7bb\") pod \"console-operator-58897d9998-4r5mm\" (UID: \"4b695371-523f-41fd-a8de-6bbc9ce319e0\") " pod="openshift-console-operator/console-operator-58897d9998-4r5mm"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.100474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276q6\" (UniqueName: \"kubernetes.io/projected/321948cb-6f71-4375-b575-ee960cd49bc2-kube-api-access-276q6\") pod \"openshift-config-operator-7777fb866f-dh55c\" (UID: \"321948cb-6f71-4375-b575-ee960cd49bc2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.129815 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5cvn\" (UniqueName: \"kubernetes.io/projected/24dc4d5e-e13d-4d4d-b1f8-390149f24544-kube-api-access-v5cvn\") pod \"router-default-5444994796-tw45t\" (UID: \"24dc4d5e-e13d-4d4d-b1f8-390149f24544\") " pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.144213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-4r5mm"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.145692 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wk6w\" (UniqueName: \"kubernetes.io/projected/d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba-kube-api-access-2wk6w\") pod \"multus-admission-controller-857f4d67dd-crsqt\" (UID: \"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.160293 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfbq\" (UniqueName: \"kubernetes.io/projected/8e46628e-0c8d-4128-b57c-ad324ff9f9bc-kube-api-access-fzfbq\") pod \"control-plane-machine-set-operator-78cbb6b69f-s4cw2\" (UID: \"8e46628e-0c8d-4128-b57c-ad324ff9f9bc\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.166771 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.190737 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sgtz\" (UniqueName: \"kubernetes.io/projected/ee963cde-b7bc-4699-9b45-aaa3b7df0e38-kube-api-access-5sgtz\") pod \"apiserver-76f77b778f-v665q\" (UID: \"ee963cde-b7bc-4699-9b45-aaa3b7df0e38\") " pod="openshift-apiserver/apiserver-76f77b778f-v665q"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.202553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"console-f9d7485db-zt9nn\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " pod="openshift-console/console-f9d7485db-zt9nn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.223903 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krr67\" (UniqueName: \"kubernetes.io/projected/6ff36f00-70ac-4a9c-96f6-ade70040b187-kube-api-access-krr67\") pod \"etcd-operator-b45778765-jr8qp\" (UID: \"6ff36f00-70ac-4a9c-96f6-ade70040b187\") " pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.249733 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/96211e14-9e17-4511-8523-609ff907f5c5-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tw5mh\" (UID: \"96211e14-9e17-4511-8523-609ff907f5c5\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.252330 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tw45t"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.264497 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.265448 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rcc\" (UniqueName: \"kubernetes.io/projected/053917dd-5476-46d8-b9d4-2a1433d86697-kube-api-access-24rcc\") pod \"machine-config-controller-84d6567774-2f7qc\" (UID: \"053917dd-5476-46d8-b9d4-2a1433d86697\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.275666 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.285907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tfxs\" (UniqueName: \"kubernetes.io/projected/dedff685-1753-453d-a4ec-4e48b74cfdc4-kube-api-access-8tfxs\") pod \"machine-config-operator-74547568cd-kl2g4\" (UID: \"dedff685-1753-453d-a4ec-4e48b74cfdc4\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.302093 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e85666ee-5696-465c-9682-802e968660ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lrcb9\" (UID: \"e85666ee-5696-465c-9682-802e968660ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.313493 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.326823 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.345641 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-4r5mm"]
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.347268 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.367619 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.409311 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dh55c"]
Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.420987 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod321948cb_6f71_4375_b575_ee960cd49bc2.slice/crio-2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518 WatchSource:0}: Error finding container 2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518: Status 404 returned error can't find the container with id 2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.427499 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430593 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430661 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430741 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn"
Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.430758 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430773 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430841 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430909 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430929 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod 
\"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.430987 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431011 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431044 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.431067 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431085 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.431478 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:48.931467355 +0000 UTC m=+93.096174820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431685 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431739 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.431769 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432160 
4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432197 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432215 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432231 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbdvk\" (UniqueName: 
\"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432267 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432283 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432304 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432326 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432431 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432506 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432539 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432567 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432599 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432644 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.432725 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.447305 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.456295 4995 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.459912 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-crsqt"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.477219 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.494967 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.497963 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.514209 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" Jan 26 23:09:48 crc kubenswrapper[4995]: W0126 23:09:48.517051 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e46628e_0c8d_4128_b57c_ad324ff9f9bc.slice/crio-a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d WatchSource:0}: Error finding container a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d: Status 404 returned error can't find the container with id a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.522361 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.532285 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.533527 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.533760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.033736327 +0000 UTC m=+93.198443792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534081 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfgj8\" (UniqueName: 
\"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534304 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534352 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: 
\"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534397 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534446 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534475 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534554 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534605 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534642 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534691 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: 
\"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbdvk\" (UniqueName: \"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534737 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534774 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534822 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534857 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.534971 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535026 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535048 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " 
pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535069 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535140 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535202 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535236 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535281 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535422 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535469 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535516 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535566 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535606 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535641 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535686 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535705 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535751 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535797 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535827 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535844 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535862 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: 
\"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535907 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.535963 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536083 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536116 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536135 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536155 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536170 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536211 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536236 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536294 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536312 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536333 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536559 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536582 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536605 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536636 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536653 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536668 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.536715 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.537148 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.03713552 +0000 UTC m=+93.201842985 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.537188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.537942 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538452 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538496 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538541 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.538544 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538584 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538605 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod 
\"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.538718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.539901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.540342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ec91f390-afe7-440e-b452-3f0bd7e65862-trusted-ca\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.546616 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548192 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548459 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.548977 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/841a4225-c083-4025-bd1e-c6cd2ebf2b85-image-registry-operator-tls\") 
pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549005 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549332 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549411 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.549756 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.551185 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.553022 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/841a4225-c083-4025-bd1e-c6cd2ebf2b85-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556017 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556212 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.556622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec91f390-afe7-440e-b452-3f0bd7e65862-metrics-tls\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.557345 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-config\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.558172 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.558399 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.559762 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.561506 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-profile-collector-cert\") pod 
\"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.569930 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.574355 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41fedfb8-9381-43a2-8f78-2dea53ad7882-metrics-tls\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.579540 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587468 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587593 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-qwn5n\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-kube-api-access-qwn5n\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.587634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/09fe04fa-126d-4c84-948f-55b13dad9e24-srv-cert\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.607715 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ec91f390-afe7-440e-b452-3f0bd7e65862-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zpckj\" (UID: \"ec91f390-afe7-440e-b452-3f0bd7e65862\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.633926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhlfg\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-kube-api-access-xhlfg\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: \"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641413 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 
23:09:48.641554 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641600 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641621 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: 
\"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641701 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: 
\"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641749 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641809 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641833 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod 
\"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.641864 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642574 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642619 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642657 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642674 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642697 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfgj8\" (UniqueName: \"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642712 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642753 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642770 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642809 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642828 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642855 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642870 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642910 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642942 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.642982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 
23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.645769 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-cert\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.646907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.647693 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.650873 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-node-bootstrap-token\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.651092 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-registration-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.651570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-tmpfs\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.651671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-certs\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.651761 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.151743011 +0000 UTC m=+93.316450476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.654431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-apiservice-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.655340 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.655502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.656088 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d0941b6-29be-464b-91b9-ecd2e8545dc0-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-webhook-cert\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658341 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-csi-data-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658403 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-mountpoint-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.658811 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-plugins-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.659798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-cabundle\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.660050 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-socket-dir\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.662777 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-profile-collector-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.663653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.664317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119edb68-a6b6-4bdf-9f74-c14211a24ecd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.667225 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/480d13a8-eecc-4614-9b43-fd3fb5f28695-srv-cert\") pod \"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.677857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8d0941b6-29be-464b-91b9-ecd2e8545dc0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.677863 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ab1b8e08-3212-4197-a8e7-db12babb6414-metrics-tls\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.678291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9544187-4d8b-4764-bfdb-067d6d6d06b4-config\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.679282 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9544187-4d8b-4764-bfdb-067d6d6d06b4-serving-cert\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.680220 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3272988d-332d-4fe7-a794-c262bb6d8e11-signing-key\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.681155 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1b8e08-3212-4197-a8e7-db12babb6414-config-volume\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.682940 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nqc\" (UniqueName: \"kubernetes.io/projected/466a813e-97dd-4113-b15c-1e0216edca40-kube-api-access-s2nqc\") pod \"migrator-59844c95c7-fk27l\" (UID: \"466a813e-97dd-4113-b15c-1e0216edca40\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.686936 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"oauth-openshift-558db77b4-tzh2d\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.706759 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbdvk\" (UniqueName: \"kubernetes.io/projected/09fe04fa-126d-4c84-948f-55b13dad9e24-kube-api-access-lbdvk\") pod \"olm-operator-6b444d44fb-8llf9\" (UID: \"09fe04fa-126d-4c84-948f-55b13dad9e24\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc 
kubenswrapper[4995]: I0126 23:09:48.725696 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkq2\" (UniqueName: \"kubernetes.io/projected/75e69d02-9a6a-4bea-b3f5-1537ef5e2516-kube-api-access-5gkq2\") pod \"openshift-controller-manager-operator-756b6f6bc6-7txcz\" (UID: \"75e69d02-9a6a-4bea-b3f5-1537ef5e2516\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.728488 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.734159 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.749332 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.750299 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.250281863 +0000 UTC m=+93.414989328 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.751331 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.765093 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.765920 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7rn\" (UniqueName: \"kubernetes.io/projected/41fedfb8-9381-43a2-8f78-2dea53ad7882-kube-api-access-bj7rn\") pod \"dns-operator-744455d44c-pw55h\" (UID: \"41fedfb8-9381-43a2-8f78-2dea53ad7882\") " pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.770139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-v665q"] Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.784376 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/841a4225-c083-4025-bd1e-c6cd2ebf2b85-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hdscw\" (UID: 
\"841a4225-c083-4025-bd1e-c6cd2ebf2b85\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.798037 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.805596 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.809958 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/da8ddf95-03f1-4cce-8ddb-22ea3735eb59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llpkl\" (UID: \"da8ddf95-03f1-4cce-8ddb-22ea3735eb59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.846334 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.850272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.850854 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.350835514 +0000 UTC m=+93.515542979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.868090 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxdsj\" (UniqueName: \"kubernetes.io/projected/3272988d-332d-4fe7-a794-c262bb6d8e11-kube-api-access-jxdsj\") pod \"service-ca-9c57cc56f-z4xpf\" (UID: \"3272988d-332d-4fe7-a794-c262bb6d8e11\") " pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.868370 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.880839 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.884135 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thpnm\" (UniqueName: \"kubernetes.io/projected/475d4d77-5500-4d9d-8d5f-c9fe0f47364b-kube-api-access-thpnm\") pod \"ingress-canary-8m6w4\" (UID: \"475d4d77-5500-4d9d-8d5f-c9fe0f47364b\") " pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.904290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"collect-profiles-29491140-x67tv\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.924443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4cr4\" (UniqueName: \"kubernetes.io/projected/119edb68-a6b6-4bdf-9f74-c14211a24ecd-kube-api-access-g4cr4\") pod \"package-server-manager-789f6589d5-zbzdl\" (UID: \"119edb68-a6b6-4bdf-9f74-c14211a24ecd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.925183 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skv4h\" (UniqueName: \"kubernetes.io/projected/7943ea01-9b7a-4a9b-9b13-6ef8203dd43b-kube-api-access-skv4h\") pod \"packageserver-d55dfcdfc-nglhh\" (UID: \"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.925996 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.941963 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5hsj\" (UniqueName: \"kubernetes.io/projected/c9544187-4d8b-4764-bfdb-067d6d6d06b4-kube-api-access-z5hsj\") pod \"service-ca-operator-777779d784-x9shl\" (UID: \"c9544187-4d8b-4764-bfdb-067d6d6d06b4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.952340 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:48 crc kubenswrapper[4995]: E0126 23:09:48.953037 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.452694716 +0000 UTC m=+93.617402171 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.958809 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.966340 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrcn\" (UniqueName: \"kubernetes.io/projected/ab1b8e08-3212-4197-a8e7-db12babb6414-kube-api-access-dxrcn\") pod \"dns-default-wt84d\" (UID: \"ab1b8e08-3212-4197-a8e7-db12babb6414\") " pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.969391 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.973523 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8m6w4" Jan 26 23:09:48 crc kubenswrapper[4995]: I0126 23:09:48.989742 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfnd\" (UniqueName: \"kubernetes.io/projected/3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae-kube-api-access-8nfnd\") pod \"csi-hostpathplugin-k4xnx\" (UID: \"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae\") " pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.010492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfgj8\" (UniqueName: \"kubernetes.io/projected/8d0941b6-29be-464b-91b9-ecd2e8545dc0-kube-api-access-zfgj8\") pod \"kube-storage-version-migrator-operator-b67b599dd-g66hh\" (UID: \"8d0941b6-29be-464b-91b9-ecd2e8545dc0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.026231 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgg7n\" (UniqueName: \"kubernetes.io/projected/480d13a8-eecc-4614-9b43-fd3fb5f28695-kube-api-access-rgg7n\") pod 
\"catalog-operator-68c6474976-zsl8z\" (UID: \"480d13a8-eecc-4614-9b43-fd3fb5f28695\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.036658 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.054069 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.054463 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.554448116 +0000 UTC m=+93.719155581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.055019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"marketplace-operator-79b997595-phjts\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.068287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x8xg\" (UniqueName: \"kubernetes.io/projected/926aea35-dcee-4eb0-9b2b-9c7c95c11ae8-kube-api-access-9x8xg\") pod \"machine-config-server-tsdjk\" (UID: \"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8\") " pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.070676 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"fdea8b0c418edfb48588d14ff27888d2ae3c0eb483299fd04faef569333e6eda"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.093219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tw45t" event={"ID":"24dc4d5e-e13d-4d4d-b1f8-390149f24544","Type":"ContainerStarted","Data":"c9b20b52a0f18ec9712faa056f61b19c7cdb8212487a56b1c5f4717c2628f871"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 
23:09:49.093265 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tw45t" event={"ID":"24dc4d5e-e13d-4d4d-b1f8-390149f24544","Type":"ContainerStarted","Data":"6244d662c35438f9f0b7fb0195f92df949f74ffed8d054990d997743c5a7aed9"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.094983 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerStarted","Data":"f8da331ad5479ba2deada0b967ed7ea0fd7ef2bec4a402a501182d5512dc16e8"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.098848 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerStarted","Data":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.099149 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.116564 4995 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qgp7d container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.116610 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: 
I0126 23:09:49.118084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"6fd6e36ee51a843b166cff419e72e3a4c2b8aa612494781178c175d203e9e522"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.118146 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" event={"ID":"b345a51c-ec48-4066-a49b-713e73429c2d","Type":"ContainerStarted","Data":"37f9c83ea30ad860cd20b48ac40f89a5da1b31a207ad12326342bbd5724e8f42"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.120087 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"aeb5e8675e5432ec2f975c8753f3114b7245f1a9f137f445c5910713e45ab72f"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.120134 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" event={"ID":"49ad869c-a391-4d0b-99fa-74e9d7ef4e87","Type":"ContainerStarted","Data":"8cac44f772bd2c32925480f955b085667662b01fbe75994873ebd78a8f7af5ca"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.124016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.137573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" event={"ID":"4b695371-523f-41fd-a8de-6bbc9ce319e0","Type":"ContainerStarted","Data":"959992f9a8bbbf6fc66596650fcd767418dbf672b1e8093ffe33becc678071ca"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.137606 4995 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" event={"ID":"4b695371-523f-41fd-a8de-6bbc9ce319e0","Type":"ContainerStarted","Data":"44f8c92f494763f6b6e1265078a24a2a0549eaaf74bbfcf09562e0b8234afe66"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.138231 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.142366 4995 patch_prober.go:28] interesting pod/console-operator-58897d9998-4r5mm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.142405 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" podUID="4b695371-523f-41fd-a8de-6bbc9ce319e0" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.158534 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pfw4t" event={"ID":"ce7a362e-896b-4492-ac2c-08bd19bba7b4","Type":"ContainerStarted","Data":"b7cb9a79d82b0aa5048b3d1e45243664ced238be2f1ae2225e2202f12d4aaf1b"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.158569 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-pfw4t" event={"ID":"ce7a362e-896b-4492-ac2c-08bd19bba7b4","Type":"ContainerStarted","Data":"4cd69f16d57d53d32131d187ff3a24fd15cad9aa0917d7fa171ecd9a9da1b143"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.159299 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.159344 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.160688 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.660667674 +0000 UTC m=+93.825375179 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.173058 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.173126 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection 
refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.177786 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" event={"ID":"492ea284-e9af-45ce-ac55-c5d8168be715","Type":"ContainerStarted","Data":"99114bb9953be1339bd024eddcc2898314389027efa02c8a1e1729d06736c331"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.177834 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" event={"ID":"492ea284-e9af-45ce-ac55-c5d8168be715","Type":"ContainerStarted","Data":"e46b930e42c8ea7f14d84979b545c9877df1aa9b16dfd8c60c63026508d66b8f"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.182380 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" event={"ID":"8e46628e-0c8d-4128-b57c-ad324ff9f9bc","Type":"ContainerStarted","Data":"942c0ce88f527e4fce712a6de5fab759daf8ffa24382477ded4546a39e4e7c88"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.182414 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" event={"ID":"8e46628e-0c8d-4128-b57c-ad324ff9f9bc","Type":"ContainerStarted","Data":"a1403b3710ae765a0f83c022ac2575e8c1b6f7d087305557a48e9d57ad87994d"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.189617 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.191397 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerStarted","Data":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192023 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192969 4995 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-zp6fr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.192998 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.193431 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.200779 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.204267 4995 generic.go:334] "Generic (PLEG): container finished" podID="cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26" containerID="ca28e7af992f7de335914ea87f9bbb5022d986d9dc7cdd971265c095169898fe" exitCode=0 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.204364 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerDied","Data":"ca28e7af992f7de335914ea87f9bbb5022d986d9dc7cdd971265c095169898fe"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.207359 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.214353 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.215334 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.215363 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"2eee7a2baa2b1a12547997b7a04cb2211ab293cccbc232a067832d6c82b6f518"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.216926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"923e85ff5e2386df613ffe2279edd58b871c7d79968ab0c23e29f57d3983fbd9"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.219010 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" event={"ID":"d8feb049-3911-43fa-bd25-6ecee076d1ed","Type":"ContainerStarted","Data":"ff06d977cce16804ac970033cc5547543b42e97a49c256279e04328924fe630e"} Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.219402 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.250134 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.254013 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261091 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.261231 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.761202814 +0000 UTC m=+93.925910289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261319 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tsdjk" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.261923 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.262221 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.762206028 +0000 UTC m=+93.926913563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.276764 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.286977 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.304767 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-jr8qp"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 
23:09:49.371380 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.372371 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.372609 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.378133 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.878083111 +0000 UTC m=+94.042790586 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.378373 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.378760 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.878748557 +0000 UTC m=+94.043456022 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: W0126 23:09:49.432558 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ff36f00_70ac_4a9c_96f6_ade70040b187.slice/crio-9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4 WatchSource:0}: Error finding container 9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4: Status 404 returned error can't find the container with id 9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.480999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.484256 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:49.984226087 +0000 UTC m=+94.148933552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.520330 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podStartSLOduration=71.520311143 podStartE2EDuration="1m11.520311143s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.519152265 +0000 UTC m=+93.683859730" watchObservedRunningTime="2026-01-26 23:09:49.520311143 +0000 UTC m=+93.685018608" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.583348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.584561 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.084544962 +0000 UTC m=+94.249252427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.599383 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-pfw4t" podStartSLOduration=72.599361272 podStartE2EDuration="1m12.599361272s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.595910268 +0000 UTC m=+93.760617743" watchObservedRunningTime="2026-01-26 23:09:49.599361272 +0000 UTC m=+93.764068737" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.685746 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.686158 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.186130018 +0000 UTC m=+94.350837493 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.690357 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.694579 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.775586 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.787805 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.788080 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.288068932 +0000 UTC m=+94.452776397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.823124 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2r7rc" podStartSLOduration=72.823095322 podStartE2EDuration="1m12.823095322s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.778354916 +0000 UTC m=+93.943062381" watchObservedRunningTime="2026-01-26 23:09:49.823095322 +0000 UTC m=+93.987802787" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.836843 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" podStartSLOduration=72.836825065 podStartE2EDuration="1m12.836825065s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.823778959 +0000 UTC m=+93.988486424" watchObservedRunningTime="2026-01-26 23:09:49.836825065 +0000 UTC m=+94.001532530" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.837803 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.878248 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.884611 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-klb9g" podStartSLOduration=72.884592235 podStartE2EDuration="1m12.884592235s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.884312858 +0000 UTC m=+94.049020323" watchObservedRunningTime="2026-01-26 23:09:49.884592235 +0000 UTC m=+94.049299720" Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.888438 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.888917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.388901289 +0000 UTC m=+94.553608754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.896511 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw"] Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.925928 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kwqrx" podStartSLOduration=72.925904257 podStartE2EDuration="1m12.925904257s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.91943886 +0000 UTC m=+94.084146315" watchObservedRunningTime="2026-01-26 23:09:49.925904257 +0000 UTC m=+94.090611722" Jan 26 23:09:49 crc kubenswrapper[4995]: W0126 23:09:49.977027 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d4d9e36_8d49_41a8_a04b_194a5f652f94.slice/crio-0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350 WatchSource:0}: Error finding container 0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350: Status 404 returned error can't find the container with id 0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350 Jan 26 23:09:49 crc kubenswrapper[4995]: I0126 23:09:49.992555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:49 crc kubenswrapper[4995]: E0126 23:09:49.992875 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.492862262 +0000 UTC m=+94.657569737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.001855 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-hpqgt" podStartSLOduration=73.00183988 podStartE2EDuration="1m13.00183988s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:49.999666968 +0000 UTC m=+94.164374433" watchObservedRunningTime="2026-01-26 23:09:50.00183988 +0000 UTC m=+94.166547345" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.051692 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-s4cw2" podStartSLOduration=73.05167607 podStartE2EDuration="1m13.05167607s" podCreationTimestamp="2026-01-26 23:08:37 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.050930812 +0000 UTC m=+94.215638287" watchObservedRunningTime="2026-01-26 23:09:50.05167607 +0000 UTC m=+94.216383535" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.104274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.105750 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.605720272 +0000 UTC m=+94.770427747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.158743 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-znswc" podStartSLOduration=73.158722088 podStartE2EDuration="1m13.158722088s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.135512425 +0000 UTC m=+94.300219890" watchObservedRunningTime="2026-01-26 23:09:50.158722088 +0000 UTC m=+94.323429553" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.177988 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wt84d"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.192801 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.212687 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.213046 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.713032336 +0000 UTC m=+94.877739801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.263127 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:50 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:50 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:50 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.263187 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.298671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" event={"ID":"75e69d02-9a6a-4bea-b3f5-1537ef5e2516","Type":"ContainerStarted","Data":"04199cb3f9efb2d7bb8fe668230301dc989ef5f0b9fcfab14e8a09e18ee33f31"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.298718 4995 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" event={"ID":"75e69d02-9a6a-4bea-b3f5-1537ef5e2516","Type":"ContainerStarted","Data":"de6f50d204e5dc9547a3689aa868d0bb5d709a60df0715a724d272a4b031cf25"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.314241 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.314823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.814804246 +0000 UTC m=+94.979511711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.314837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" event={"ID":"841a4225-c083-4025-bd1e-c6cd2ebf2b85","Type":"ContainerStarted","Data":"2179003cd7939a975598e3016ba0b03669026ac2c8692a0b6526139191f6dabc"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.331722 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gqbzs" podStartSLOduration=73.331705877 podStartE2EDuration="1m13.331705877s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.329506453 +0000 UTC m=+94.494213918" watchObservedRunningTime="2026-01-26 23:09:50.331705877 +0000 UTC m=+94.496413342" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.332352 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-pw55h"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.337713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" event={"ID":"96211e14-9e17-4511-8523-609ff907f5c5","Type":"ContainerStarted","Data":"138e1b9eafa611ecccd5af5f57dd4b102b352744b3feb0f19c2d0dbc6d5c17ec"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 
23:09:50.337774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" event={"ID":"96211e14-9e17-4511-8523-609ff907f5c5","Type":"ContainerStarted","Data":"d140e4cfb51c291288bdc459112677398991ad415b55382743a58000ff18cafa"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.352082 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.352893 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.360139 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"16c116e2dcc97a59480ce16ed4abe4a87f4cd815400f757d482908a17bc8e17b"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.373872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"ba6fdd27dee74df98e17201dbc91afb8e201e8ca3541029b31971982d4cc576c"} Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.373972 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41fedfb8_9381_43a2_8f78_2dea53ad7882.slice/crio-209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627 WatchSource:0}: Error finding container 209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627: Status 404 returned error can't find the container with id 209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384474 4995 
generic.go:334] "Generic (PLEG): container finished" podID="ee963cde-b7bc-4699-9b45-aaa3b7df0e38" containerID="9f45b7e58337e68bb27dec66942c772f61c0d530e6672fd4c8fe1efec8aaa2a3" exitCode=0 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384799 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerDied","Data":"9f45b7e58337e68bb27dec66942c772f61c0d530e6672fd4c8fe1efec8aaa2a3"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.384829 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k4xnx"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.398015 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod466a813e_97dd_4113_b15c_1e0216edca40.slice/crio-e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9 WatchSource:0}: Error finding container e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9: Status 404 returned error can't find the container with id e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9 Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.398899 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09fe04fa_126d_4c84_948f_55b13dad9e24.slice/crio-339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6 WatchSource:0}: Error finding container 339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6: Status 404 returned error can't find the container with id 339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.399208 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" 
event={"ID":"e85666ee-5696-465c-9682-802e968660ec","Type":"ContainerStarted","Data":"6711724abcc04497430771ccde92985e3aa378bddde6826bf85e7a8b5846f861"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.415493 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.416823 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:50.916807412 +0000 UTC m=+95.081514867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.449767 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tw45t" podStartSLOduration=73.449750562 podStartE2EDuration="1m13.449750562s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.449723611 +0000 UTC m=+94.614431086" watchObservedRunningTime="2026-01-26 23:09:50.449750562 +0000 UTC 
m=+94.614458027" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.495262 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499660 4995 generic.go:334] "Generic (PLEG): container finished" podID="321948cb-6f71-4375-b575-ee960cd49bc2" containerID="d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657" exitCode=0 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerDied","Data":"d0c5022bc8c220348c16ec918943e215fdb768799fd02e688e4e67a379a01657"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.499749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" event={"ID":"321948cb-6f71-4375-b575-ee960cd49bc2","Type":"ContainerStarted","Data":"f36e5f721faf9a70dfb86966046c5b8a1bdf9ed26f64168308ee1787e8bafa4a"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.500593 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.502387 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.517328 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 
23:09:50.519944 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.019921575 +0000 UTC m=+95.184629030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.562784 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerStarted","Data":"0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.562836 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.589047 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7943ea01_9b7a_4a9b_9b13_6ef8203dd43b.slice/crio-2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea WatchSource:0}: Error finding container 2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea: Status 404 returned error can't find the container with id 2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.590139 4995 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.591960 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"e12c2d1e4d04138049caecf9ae6ff19e5870afc1ee3de3084ebf8ae47b9bddcc"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.591985 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" event={"ID":"d8cf1992-8b5d-4b4a-a52a-8ce17ab5ddba","Type":"ContainerStarted","Data":"2c9ba8302b72d83c238c6a51c10bac7413f42197cd395baa6cd3b0c1f2856c8e"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.603035 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tsdjk" event={"ID":"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8","Type":"ContainerStarted","Data":"f0061fddd3128478f8e5c61bf68ad5a3522fb0ef86078f5d0b706cffc8c22d1b"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.603084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tsdjk" event={"ID":"926aea35-dcee-4eb0-9b2b-9c7c95c11ae8","Type":"ContainerStarted","Data":"8f583dd5d40e1a11244a9041977046e68f749d9500439b3cae42253f19bc07fd"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.614227 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" event={"ID":"6ff36f00-70ac-4a9c-96f6-ade70040b187","Type":"ContainerStarted","Data":"9468e4366ef5f6d33a8eefb14db467bf11b044e6f88cf7ec9ac39d6a01a76fe4"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.620055 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8m6w4"] Jan 26 23:09:50 crc 
kubenswrapper[4995]: I0126 23:09:50.621046 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.621350 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.121332736 +0000 UTC m=+95.286040201 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.702961 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerStarted","Data":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.729202 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc 
kubenswrapper[4995]: E0126 23:09:50.729504 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.229470531 +0000 UTC m=+95.394177996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.729620 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.730773 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.230756252 +0000 UTC m=+95.395463797 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.735367 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x9shl"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.737912 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podStartSLOduration=73.737890685 podStartE2EDuration="1m13.737890685s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:50.730232239 +0000 UTC m=+94.894939704" watchObservedRunningTime="2026-01-26 23:09:50.737890685 +0000 UTC m=+94.902598150" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.738216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" event={"ID":"da8ddf95-03f1-4cce-8ddb-22ea3735eb59","Type":"ContainerStarted","Data":"f7f3602af6d83ee299345cf560dac252f50bc08d861a55ec7e4343edb9599215"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.745685 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"19ad45a549b4780560be836102a3911d01d8cfa0afeb9e847667e7997f8505d4"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 
23:09:50.745724 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"a0e9db7c70df270c1aab27804f498df0ec21cbb1ef14b3800e9ad6e46c8502df"} Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748077 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748117 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.748975 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z"] Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.752211 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.753054 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-z4xpf"] Jan 26 23:09:50 crc kubenswrapper[4995]: W0126 23:09:50.775507 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3272988d_332d_4fe7_a794_c262bb6d8e11.slice/crio-6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6 WatchSource:0}: Error finding container 
6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6: Status 404 returned error can't find the container with id 6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6 Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.803555 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.834302 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.835748 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.33572651 +0000 UTC m=+95.500433965 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:50 crc kubenswrapper[4995]: I0126 23:09:50.936714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:50 crc kubenswrapper[4995]: E0126 23:09:50.937152 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.437131781 +0000 UTC m=+95.601839256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.006265 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-crsqt" podStartSLOduration=74.006244449 podStartE2EDuration="1m14.006244449s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.001619677 +0000 UTC m=+95.166327152" watchObservedRunningTime="2026-01-26 23:09:51.006244449 +0000 UTC m=+95.170951924" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.038191 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.038610 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.538591204 +0000 UTC m=+95.703298669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.140027 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.140514 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.640502477 +0000 UTC m=+95.805209942 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.159973 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-zt9nn" podStartSLOduration=74.15995576 podStartE2EDuration="1m14.15995576s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.158635878 +0000 UTC m=+95.323343333" watchObservedRunningTime="2026-01-26 23:09:51.15995576 +0000 UTC m=+95.324663225" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.265082 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.266230 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.766205819 +0000 UTC m=+95.930913294 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.273887 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" podStartSLOduration=74.273862854 podStartE2EDuration="1m14.273862854s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.271399365 +0000 UTC m=+95.436106850" watchObservedRunningTime="2026-01-26 23:09:51.273862854 +0000 UTC m=+95.438570319" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.310918 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:51 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:51 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:51 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.310982 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.366952 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.367412 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.867376554 +0000 UTC m=+96.032084069 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.407601 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" podStartSLOduration=74.4075867 podStartE2EDuration="1m14.4075867s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.407063097 +0000 UTC m=+95.571770562" watchObservedRunningTime="2026-01-26 23:09:51.4075867 +0000 UTC m=+95.572294165" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.432432 4995 csr.go:261] certificate signing request csr-slcmk is approved, waiting to be issued Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.449965 4995 csr.go:257] certificate signing request csr-slcmk 
is issued Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.464488 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tw5mh" podStartSLOduration=74.464471561 podStartE2EDuration="1m14.464471561s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.462579475 +0000 UTC m=+95.627286940" watchObservedRunningTime="2026-01-26 23:09:51.464471561 +0000 UTC m=+95.629179026" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.467822 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.468224 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:51.968205331 +0000 UTC m=+96.132912796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.508438 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-4r5mm" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.539394 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tsdjk" podStartSLOduration=6.539379049 podStartE2EDuration="6.539379049s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.53900949 +0000 UTC m=+95.703716955" watchObservedRunningTime="2026-01-26 23:09:51.539379049 +0000 UTC m=+95.704086514" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.572053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.572396 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:52.07238319 +0000 UTC m=+96.237090655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.675716 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.676269 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.176253841 +0000 UTC m=+96.340961306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.777615 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.777995 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.277982499 +0000 UTC m=+96.442689974 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.779583 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" event={"ID":"e85666ee-5696-465c-9682-802e968660ec","Type":"ContainerStarted","Data":"5bfcbd075cde3aa81de266cf36c70ef7981337818ad10d4986e59d33b2b2eca7"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" event={"ID":"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b","Type":"ContainerStarted","Data":"adbdf0fe767678525a4521c890a3008330b01b81971d08f068bc228c05d82eb4"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783070 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" event={"ID":"7943ea01-9b7a-4a9b-9b13-6ef8203dd43b","Type":"ContainerStarted","Data":"2841bd7c15f23eef690682da4492e255886f0039594cccf2057e57738628ddea"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.783804 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.795261 4995 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nglhh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" start-of-body= Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.795304 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podUID="7943ea01-9b7a-4a9b-9b13-6ef8203dd43b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": dial tcp 10.217.0.25:5443: connect: connection refused" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.796183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerStarted","Data":"a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.796215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerStarted","Data":"052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.817829 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lrcb9" podStartSLOduration=74.817813376 podStartE2EDuration="1m14.817813376s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.815404607 +0000 UTC m=+95.980112072" watchObservedRunningTime="2026-01-26 23:09:51.817813376 +0000 UTC m=+95.982520841" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.823882 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" event={"ID":"c9544187-4d8b-4764-bfdb-067d6d6d06b4","Type":"ContainerStarted","Data":"9dbb95fddc9eac5cf0bf1fd19f34d55bb35056a926522f3d25044db55f895b3c"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.867009 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"3e759b12e81af7b0175ab715bf9ec94beacbe5d6a93e6fedb53a3dbf3e4469ed"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.867066 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"209ce7b3e6777cd9c1558c55470216a251eeab6294b0753924736c94cd89a627"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.878634 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.879942 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.379922923 +0000 UTC m=+96.544630398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.902395 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" podStartSLOduration=74.902372948 podStartE2EDuration="1m14.902372948s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.851252317 +0000 UTC m=+96.015959782" watchObservedRunningTime="2026-01-26 23:09:51.902372948 +0000 UTC m=+96.067080413" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.902744 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podStartSLOduration=74.902735747 podStartE2EDuration="1m14.902735747s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:51.899695893 +0000 UTC m=+96.064403368" watchObservedRunningTime="2026-01-26 23:09:51.902735747 +0000 UTC m=+96.067443222" Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.958495 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" event={"ID":"480d13a8-eecc-4614-9b43-fd3fb5f28695","Type":"ContainerStarted","Data":"261acd69eb386ddaf479b9993cf8fbe37e59621b58cb52cf565e7595f52df018"} Jan 
26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.972212 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" event={"ID":"da8ddf95-03f1-4cce-8ddb-22ea3735eb59","Type":"ContainerStarted","Data":"66b5b4292a2a7c23d32feaccd6011afd85110cf71d55ebb1e116577fe571d501"} Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.981921 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:51 crc kubenswrapper[4995]: E0126 23:09:51.982259 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.482247647 +0000 UTC m=+96.646955112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:51 crc kubenswrapper[4995]: I0126 23:09:51.995326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" event={"ID":"053917dd-5476-46d8-b9d4-2a1433d86697","Type":"ContainerStarted","Data":"ff78c88e4de9e3986b9995d8728e4636db331afa8f5bcb23a1f0ae74ab076d20"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.019436 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llpkl" podStartSLOduration=75.019415149 podStartE2EDuration="1m15.019415149s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.019417139 +0000 UTC m=+96.184124604" watchObservedRunningTime="2026-01-26 23:09:52.019415149 +0000 UTC m=+96.184122614" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.048308 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8m6w4" event={"ID":"475d4d77-5500-4d9d-8d5f-c9fe0f47364b","Type":"ContainerStarted","Data":"c52cdb7f64b66074ef00da2a8e7641edd06c5d26c86beecfa122f21f916d388f"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.048365 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8m6w4" 
event={"ID":"475d4d77-5500-4d9d-8d5f-c9fe0f47364b","Type":"ContainerStarted","Data":"2cdc2a411873965984b12b626c218baa7413ee8cd7c72cdfd4e50a4b4aec5e30"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.094368 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2f7qc" podStartSLOduration=75.094349558 podStartE2EDuration="1m15.094349558s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.062486114 +0000 UTC m=+96.227193589" watchObservedRunningTime="2026-01-26 23:09:52.094349558 +0000 UTC m=+96.259057023" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.097203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.098315 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.598301034 +0000 UTC m=+96.763008499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.104701 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"be2e0b08ab75d269003ff41dcd04a1efa5e2629a82770cafe9dee6ae2f712209"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.106496 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"f6a1dfaeb088a79efe2e9257f16373638bb8bbade02656c5af4a85ba1c062d9c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.131937 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" event={"ID":"cc6a3bc8-3e8b-4bff-8978-c73fc1d90c26","Type":"ContainerStarted","Data":"eca71026dc9f3b3dbe01cbb7bd01b700fb9a8d49cb36d1ff176aa5ee7d254757"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.152445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerStarted","Data":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.152655 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:52 
crc kubenswrapper[4995]: I0126 23:09:52.169716 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" podStartSLOduration=74.169697717 podStartE2EDuration="1m14.169697717s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.168260752 +0000 UTC m=+96.332968217" watchObservedRunningTime="2026-01-26 23:09:52.169697717 +0000 UTC m=+96.334405182" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.170932 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8m6w4" podStartSLOduration=7.170921936 podStartE2EDuration="7.170921936s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.096680814 +0000 UTC m=+96.261388279" watchObservedRunningTime="2026-01-26 23:09:52.170921936 +0000 UTC m=+96.335629411" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.173415 4995 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-tzh2d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.173484 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.175196 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"05dc86749f9dd2e3990900dd850ea9416fc306818b1bec95ac6a6321744177cc"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.175226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"e13291e7d158b20d4d8ea0898c207b562a9dfe885da9f3620d7931d70a7400b9"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.192483 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" event={"ID":"09fe04fa-126d-4c84-948f-55b13dad9e24","Type":"ContainerStarted","Data":"4ffe565df59b5bc79fbfbed2c58fd33ab405ab72849f847128ce3ad63c0cf89c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.192525 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" event={"ID":"09fe04fa-126d-4c84-948f-55b13dad9e24","Type":"ContainerStarted","Data":"339919c85c25ea1683f5033f117ddf1ac344477c80f1687b1b322a624fa546d6"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.194504 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.198492 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.200875 4995 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.700854853 +0000 UTC m=+96.865562368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.206983 4995 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8llf9 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.207049 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" podUID="09fe04fa-126d-4c84-948f-55b13dad9e24" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.213337 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"2b7607e6165bf1d4a4546b4d547ad357b02e5110c760b8b5bf5d9c5983e7b8db"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.236049 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" event={"ID":"841a4225-c083-4025-bd1e-c6cd2ebf2b85","Type":"ContainerStarted","Data":"caa42be5399fec9cdf7e677d8727f49170bcdd061bffde68b0a8fbd63bbf5777"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.262363 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"dc24e5e0d1a418b24121e95a5c16f151f64b7a6516ad40a04a0f83b716c02a5c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.262409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"c987d061c01df87edfd392b564ecafbeb38a05d30db68fe947bd28cb3a945eee"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.265359 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:52 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:52 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:52 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.265422 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.284402 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-jr8qp" 
event={"ID":"6ff36f00-70ac-4a9c-96f6-ade70040b187","Type":"ContainerStarted","Data":"8d662873249ff4c19879c2de852c98e29aa88240e0f38b5a5e3455df51ddb9ce"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.305357 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.307016 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.806994229 +0000 UTC m=+96.971701694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.309124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"177e8122a91f7debb79e36811c28cd0d462e25e0266e9ce5472208d8ab56d59e"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.309352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" 
event={"ID":"ec91f390-afe7-440e-b452-3f0bd7e65862","Type":"ContainerStarted","Data":"32e5b0c926e714f88c49bb8f0d89c57260d146538e5aecf7e4305a05024e9d91"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.347537 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerStarted","Data":"f901f601e0243ea0adb58f7b81260269e5e87406c390fbde6045e9147797112d"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.347753 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podStartSLOduration=75.347732438 podStartE2EDuration="1m15.347732438s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.235679498 +0000 UTC m=+96.400386973" watchObservedRunningTime="2026-01-26 23:09:52.347732438 +0000 UTC m=+96.512439923" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.348693 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.348802 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" podStartSLOduration=75.348794944 podStartE2EDuration="1m15.348794944s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.319995945 +0000 UTC m=+96.484703410" watchObservedRunningTime="2026-01-26 23:09:52.348794944 +0000 UTC m=+96.513502409" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.355507 4995 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.355570 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.358660 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hdscw" podStartSLOduration=75.358630172 podStartE2EDuration="1m15.358630172s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.349244394 +0000 UTC m=+96.513951859" watchObservedRunningTime="2026-01-26 23:09:52.358630172 +0000 UTC m=+96.523337667" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.360638 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.360703 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.371641 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"45a093783232fa31c18d58ec566f319c5ea1702bc8113b7bd398c26390a146b8"} Jan 
26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.371698 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" event={"ID":"dedff685-1753-453d-a4ec-4e48b74cfdc4","Type":"ContainerStarted","Data":"2d33d284fe3cdd9b402299d5d737ea07c24cd58a2484eae421d6bdc615797f5d"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.386205 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zpckj" podStartSLOduration=75.386179041 podStartE2EDuration="1m15.386179041s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.382773158 +0000 UTC m=+96.547480633" watchObservedRunningTime="2026-01-26 23:09:52.386179041 +0000 UTC m=+96.550886506" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.387546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" event={"ID":"8d0941b6-29be-464b-91b9-ecd2e8545dc0","Type":"ContainerStarted","Data":"347123cb30c0a17c4092f409f11db531a9b1ac23288bc2d3d58b8a9825e46ee9"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.387581 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" event={"ID":"8d0941b6-29be-464b-91b9-ecd2e8545dc0","Type":"ContainerStarted","Data":"dd0bf1ef65dc2b9ea2219ffc20c2e478ee6397d48e2bc37b205576281b56c88c"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.407043 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.408658 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:52.908642656 +0000 UTC m=+97.073350121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.421805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" event={"ID":"3272988d-332d-4fe7-a794-c262bb6d8e11","Type":"ContainerStarted","Data":"6cef76c6127e4c063afdb02cd7a9a795f6f3131e95f45a30b362badda63d90f6"} Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.421967 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podStartSLOduration=74.421956949 podStartE2EDuration="1m14.421956949s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.419984501 +0000 UTC m=+96.584691966" watchObservedRunningTime="2026-01-26 23:09:52.421956949 +0000 UTC m=+96.586664414" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.423753 4995 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.423803 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.451168 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 23:04:51 +0000 UTC, rotation deadline is 2026-10-25 14:03:26.76835503 +0000 UTC Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.451226 4995 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6518h53m34.31713222s for next certificate rotation Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.472355 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kl2g4" podStartSLOduration=75.472334802 podStartE2EDuration="1m15.472334802s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.466281385 +0000 UTC m=+96.630988850" watchObservedRunningTime="2026-01-26 23:09:52.472334802 +0000 UTC m=+96.637042267" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.509579 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.514026 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.014004153 +0000 UTC m=+97.178711698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.523186 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-g66hh" podStartSLOduration=75.523169286 podStartE2EDuration="1m15.523169286s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.521780102 +0000 UTC m=+96.686487567" watchObservedRunningTime="2026-01-26 23:09:52.523169286 +0000 UTC m=+96.687876761" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.545495 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dh55c" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.593916 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" podStartSLOduration=74.593894622 podStartE2EDuration="1m14.593894622s" 
podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.588945652 +0000 UTC m=+96.753653117" watchObservedRunningTime="2026-01-26 23:09:52.593894622 +0000 UTC m=+96.758602097" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.614286 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.614650 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.114635236 +0000 UTC m=+97.279342701 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.641957 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-7txcz" podStartSLOduration=75.641933738 podStartE2EDuration="1m15.641933738s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:52.63251219 +0000 UTC m=+96.797219655" watchObservedRunningTime="2026-01-26 23:09:52.641933738 +0000 UTC m=+96.806641203" Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.717685 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.718038 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.218020775 +0000 UTC m=+97.382728240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.819327 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.819689 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.319673742 +0000 UTC m=+97.484381207 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.920300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:52 crc kubenswrapper[4995]: E0126 23:09:52.920669 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.420651123 +0000 UTC m=+97.585358588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:52 crc kubenswrapper[4995]: I0126 23:09:52.971376 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.021741 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.022208 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.522195928 +0000 UTC m=+97.686903393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.123117 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.123305 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.623277731 +0000 UTC m=+97.787985196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.123437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.123759 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.623746253 +0000 UTC m=+97.788453718 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.224634 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.724610211 +0000 UTC m=+97.889317676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.224674 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.225019 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.225378 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.725367779 +0000 UTC m=+97.890075244 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.256772 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:53 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:53 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:53 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.257167 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.326208 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.326633 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.826582326 +0000 UTC m=+97.991289811 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.326743 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.327197 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.82717898 +0000 UTC m=+97.991886445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.428511 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.428716 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.928690944 +0000 UTC m=+98.093398409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.428853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.429220 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:53.929208987 +0000 UTC m=+98.093916452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.436372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" event={"ID":"466a813e-97dd-4113-b15c-1e0216edca40","Type":"ContainerStarted","Data":"c76f7d82e7032dc4b3ac8bf3044be581dde5e4f417e82edf564a451e3cfc7f1a"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.439680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" event={"ID":"c9544187-4d8b-4764-bfdb-067d6d6d06b4","Type":"ContainerStarted","Data":"5e58a6ce93af4848462a80240a626a91fc4d1f5d8ac87abf28e4213177300903"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"b8a1fdab4fb968bb12d219fe85fc09650cec7cf98aa079983715ceb8a4e74fd7"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" event={"ID":"119edb68-a6b6-4bdf-9f74-c14211a24ecd","Type":"ContainerStarted","Data":"4611d303439c93e0dbfec5fc69dabc2d2ec7db08bdcbfcd2b14b6ece9d0d16ce"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.443555 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.451059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" event={"ID":"480d13a8-eecc-4614-9b43-fd3fb5f28695","Type":"ContainerStarted","Data":"51322341039e5bef7815e1c3fcee6f14bc93a4c4e2b1ce327c1c5c4c4e3d0ee1"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.452167 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.460672 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wt84d" event={"ID":"ab1b8e08-3212-4197-a8e7-db12babb6414","Type":"ContainerStarted","Data":"016463c4e561813efadc381670a8731264d6a825a500650bfd514d2f13e1b85e"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.461406 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wt84d" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.463473 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerStarted","Data":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.464775 4995 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.464812 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" 
podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.468179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"db9dedffe35b3e14d9b890a473e94becd65659f6848fe2b495fae97ea15495f8"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.471655 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" event={"ID":"41fedfb8-9381-43a2-8f78-2dea53ad7882","Type":"ContainerStarted","Data":"2dcf22460f6e4ed87e1c53aba10862935a62e364f9ed56650a002a271b8a9cf2"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.484320 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fk27l" podStartSLOduration=76.484299574 podStartE2EDuration="1m16.484299574s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.469077975 +0000 UTC m=+97.633785440" watchObservedRunningTime="2026-01-26 23:09:53.484299574 +0000 UTC m=+97.649007039" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.493601 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-v665q" event={"ID":"ee963cde-b7bc-4699-9b45-aaa3b7df0e38","Type":"ContainerStarted","Data":"5059d7545e626d7ede92cdac2582a6bca1d3973652183fc2821a373d3b81e3ff"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.501902 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-z4xpf" 
event={"ID":"3272988d-332d-4fe7-a794-c262bb6d8e11","Type":"ContainerStarted","Data":"0cb60e301035a827dcf1c548a706e468599bc733dfaaa2f95fd3d095cad45f22"} Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.518222 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.522178 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8llf9" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.530885 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.533024 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.032994636 +0000 UTC m=+98.197702111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.533783 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vmkbr" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.534662 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zsl8z" podStartSLOduration=76.534640426 podStartE2EDuration="1m16.534640426s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.517668274 +0000 UTC m=+97.682375749" watchObservedRunningTime="2026-01-26 23:09:53.534640426 +0000 UTC m=+97.699347891" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.571045 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x9shl" podStartSLOduration=75.571030059 podStartE2EDuration="1m15.571030059s" podCreationTimestamp="2026-01-26 23:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.570585018 +0000 UTC m=+97.735292493" watchObservedRunningTime="2026-01-26 23:09:53.571030059 +0000 UTC m=+97.735737524" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.572226 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.637789 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wt84d" podStartSLOduration=8.637773598999999 podStartE2EDuration="8.637773599s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.627134031 +0000 UTC m=+97.791841496" watchObservedRunningTime="2026-01-26 23:09:53.637773599 +0000 UTC m=+97.802481064" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.643984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.649820 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.149805231 +0000 UTC m=+98.314512696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.696527 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" podStartSLOduration=76.696504595 podStartE2EDuration="1m16.696504595s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.679661416 +0000 UTC m=+97.844368891" watchObservedRunningTime="2026-01-26 23:09:53.696504595 +0000 UTC m=+97.861212060" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.732001 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-pw55h" podStartSLOduration=76.731983936 podStartE2EDuration="1m16.731983936s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.729535896 +0000 UTC m=+97.894243361" watchObservedRunningTime="2026-01-26 23:09:53.731983936 +0000 UTC m=+97.896691401" Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.746607 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.747127 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.247089642 +0000 UTC m=+98.411797117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.851598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.851951 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.351933677 +0000 UTC m=+98.516641142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.957667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:53 crc kubenswrapper[4995]: E0126 23:09:53.958053 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.458037432 +0000 UTC m=+98.622744897 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:53 crc kubenswrapper[4995]: I0126 23:09:53.989847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-v665q" podStartSLOduration=76.989824494 podStartE2EDuration="1m16.989824494s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:53.986583405 +0000 UTC m=+98.151290870" watchObservedRunningTime="2026-01-26 23:09:53.989824494 +0000 UTC m=+98.154531969" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.061195 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.061728 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.561707759 +0000 UTC m=+98.726415264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.162543 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.162726 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.662678659 +0000 UTC m=+98.827386124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.162917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.163334 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.663322665 +0000 UTC m=+98.828030130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.258230 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:54 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:54 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:54 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.258280 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.264074 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.264286 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 23:09:54.764236434 +0000 UTC m=+98.928943899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.264404 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.264803 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.764791478 +0000 UTC m=+98.929498963 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.365606 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.365753 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.865724538 +0000 UTC m=+99.030432013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.366064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.366414 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.866401924 +0000 UTC m=+99.031109439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.467492 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.467664 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.967638281 +0000 UTC m=+99.132345746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.467792 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.468134 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:54.968121173 +0000 UTC m=+99.132828638 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.502772 4995 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-nglhh container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.502874 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" podUID="7943ea01-9b7a-4a9b-9b13-6ef8203dd43b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.25:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.509343 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"854c65a3bfabc7f2794f203aa6c37b4a6cf9d9e8fd8618fb94935ccaf11827d4"} Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.509383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"341fe03dc3df1280b812fa605335212c613d14c06e4acdb6a55481a1b888361c"} Jan 26 23:09:54 crc kubenswrapper[4995]: 
I0126 23:09:54.510169 4995 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-phjts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.510211 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.569025 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.569267 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.069240797 +0000 UTC m=+99.233948262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.569750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.571298 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.071285217 +0000 UTC m=+99.235992682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.630138 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-nglhh" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.673536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.673769 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.173742684 +0000 UTC m=+99.338450149 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.673956 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.674305 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.174292097 +0000 UTC m=+99.338999562 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.774868 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.775048 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.275022762 +0000 UTC m=+99.439730227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.775174 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.775593 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.275579885 +0000 UTC m=+99.440287360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.784794 4995 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.876193 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.876411 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.376378882 +0000 UTC m=+99.541086347 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.876690 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.876987 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.376975416 +0000 UTC m=+99.541682871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:54 crc kubenswrapper[4995]: I0126 23:09:54.978012 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:54 crc kubenswrapper[4995]: E0126 23:09:54.978598 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.478583322 +0000 UTC m=+99.643290787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.080318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.080693 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.58067795 +0000 UTC m=+99.745385415 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.181699 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.182077 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.682060491 +0000 UTC m=+99.846767966 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.256285 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.257196 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.259320 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.260210 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:55 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:55 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:55 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.260259 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.282702 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.283025 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.783009391 +0000 UTC m=+99.947716846 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.315724 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.383884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.384063 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.884037892 +0000 UTC m=+100.048745357 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384181 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.384396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: 
\"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.384647 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.884639246 +0000 UTC m=+100.049346701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.455386 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.456215 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.458057 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.465203 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485560 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485839 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485913 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485934 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " 
pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.485977 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.486566 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 23:09:55.98653594 +0000 UTC m=+100.151243405 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.486857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.487170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " 
pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.492855 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4-metrics-certs\") pod \"network-metrics-daemon-vlmfg\" (UID: \"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4\") " pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.502646 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"community-operators-6wf22\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.516853 4995 generic.go:334] "Generic (PLEG): container finished" podID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerID="a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166" exitCode=0 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.516904 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerDied","Data":"a1b58f1c7c19e3271d8e92fc188032b01aa219cc41efeec1b600d96847739166"} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.518807 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" event={"ID":"3ff2e751-87a0-4a72-ac26-5c1aa5d7d0ae","Type":"ContainerStarted","Data":"0590f49f70ec5bb7f49bc800e9731d7932646b95b6b71338d271b90ff8efccfc"} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.551386 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k4xnx" podStartSLOduration=10.551372283 
podStartE2EDuration="10.551372283s" podCreationTimestamp="2026-01-26 23:09:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:55.54999368 +0000 UTC m=+99.714701145" watchObservedRunningTime="2026-01-26 23:09:55.551372283 +0000 UTC m=+99.716079748" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.575708 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.592827 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.592867 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.594200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.594273 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: E0126 23:09:55.605389 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 23:09:56.105346483 +0000 UTC m=+100.270053948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-hjxrn" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.637477 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vlmfg" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.644167 4995 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T23:09:54.78481741Z","Handler":null,"Name":""} Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.647417 4995 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.647461 4995 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.657322 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.662543 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.671390 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.703877 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.704544 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.705006 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.705474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.708577 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.721899 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"certified-operators-8z855\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.772668 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.802199 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805442 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805460 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.805479 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: W0126 
23:09:55.809796 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58513b5e_460e_4344_91e3_1d20e26fd533.slice/crio-f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7 WatchSource:0}: Error finding container f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7: Status 404 returned error can't find the container with id f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7 Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.813207 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.813252 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.854064 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.855409 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.868571 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.879360 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-hjxrn\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.906609 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.907195 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.907465 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.915430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vlmfg"] Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.925492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"community-operators-q6mtp\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:55 crc kubenswrapper[4995]: I0126 23:09:55.980161 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008156 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.008229 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.019065 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.026648 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109889 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109972 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.109992 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.110571 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.110644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.127352 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"certified-operators-7ptdh\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.181701 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.229836 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.257058 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:56 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:56 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:56 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.257135 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.291509 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.419499 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.527730 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.531220 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 
23:09:56.531400 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.531458 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerStarted","Data":"a9d19028654a4b4f323d0e8da8ba08742825da3af7b48d707205e793ef542ae5"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536287 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536338 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.536372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerStarted","Data":"20ff719d1a611af55cb9cea51a19289e5f98717222c932e73ed4f4672c8a5fcb"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543181 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"77a0650a9a37800b30025eaa5c17f734f4cf3685d82638b32ea776da6a52ebb1"} Jan 26 
23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.543800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"8116ceea19f379c95631b0c94377eb4636083008b47db384584120c2df5f151d"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546613 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" exitCode=0 Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.546704 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerStarted","Data":"f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerStarted","Data":"5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550084 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerStarted","Data":"c0781d7b5c2499fcb553527a8fd295fe436cb8680c543a89922297ff4d9b554f"} Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.550296 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.711520 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" podStartSLOduration=79.711497221 podStartE2EDuration="1m19.711497221s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:56.705806403 +0000 UTC m=+100.870513868" watchObservedRunningTime="2026-01-26 23:09:56.711497221 +0000 UTC m=+100.876204686" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.836500 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927807 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927925 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.927974 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") pod \"7de4fe23-2da4-47df-a68b-d6d5148ab964\" (UID: \"7de4fe23-2da4-47df-a68b-d6d5148ab964\") " Jan 26 23:09:56 crc 
kubenswrapper[4995]: I0126 23:09:56.928659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume" (OuterVolumeSpecName: "config-volume") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.935914 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9" (OuterVolumeSpecName: "kube-api-access-77hg9") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "kube-api-access-77hg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:09:56 crc kubenswrapper[4995]: I0126 23:09:56.936007 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7de4fe23-2da4-47df-a68b-d6d5148ab964" (UID: "7de4fe23-2da4-47df-a68b-d6d5148ab964"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.029913 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de4fe23-2da4-47df-a68b-d6d5148ab964-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.030281 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7de4fe23-2da4-47df-a68b-d6d5148ab964-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.030291 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77hg9\" (UniqueName: \"kubernetes.io/projected/7de4fe23-2da4-47df-a68b-d6d5148ab964-kube-api-access-77hg9\") on node \"crc\" DevicePath \"\"" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038682 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: E0126 23:09:57.038899 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038911 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.038999 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" containerName="collect-profiles" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.039391 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.044023 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.044277 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.082763 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.131053 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.131275 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.178980 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.232484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.257657 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:57 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:57 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:57 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.257767 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.258194 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod 
\"revision-pruner-9-crc\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.368303 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.454670 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.455814 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.461437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.464016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518598 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518956 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.518598 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.519218 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.535931 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.536000 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.536045 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.572049 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" exitCode=0 Jan 26 23:09:57 crc 
kubenswrapper[4995]: I0126 23:09:57.572139 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.572166 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerStarted","Data":"20edf153be996b3cf630c557f436ea3736b0f71a5fce8a127880088910f8cf24"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.585392 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vlmfg" event={"ID":"4a08a29c-eeec-4ab8-a7f7-8a21f50fe6c4","Type":"ContainerStarted","Data":"a676ce9e45e110b934eacc0ed00833fb54699f6e8cba6d363a94925b526491d1"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.607189 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.610307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv" event={"ID":"7de4fe23-2da4-47df-a68b-d6d5148ab964","Type":"ContainerDied","Data":"052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015"} Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.610346 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="052973f6fc62d2870635d2389e1e0d1e76e71a306a0edffd354da85ca2cc2015" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.621413 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vlmfg" podStartSLOduration=80.621366435 podStartE2EDuration="1m20.621366435s" podCreationTimestamp="2026-01-26 23:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:09:57.619864339 +0000 UTC m=+101.784571814" watchObservedRunningTime="2026-01-26 23:09:57.621366435 +0000 UTC m=+101.786073900" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637637 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637720 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " 
pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.637767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.638290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.640230 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.657796 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"redhat-marketplace-px4t9\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.683884 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.791514 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.851584 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.852746 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.866249 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.941897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.942292 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:57 crc kubenswrapper[4995]: I0126 23:09:57.942463 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046865 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.046931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.047559 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.048712 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.092640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn7v7\" (UniqueName: 
\"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"redhat-marketplace-9wv6w\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.135444 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.187317 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.252616 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.256126 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:58 crc kubenswrapper[4995]: [-]has-synced failed: reason withheld Jan 26 23:09:58 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:58 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.256182 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.457311 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.458241 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:58 crc 
kubenswrapper[4995]: I0126 23:09:58.458264 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.458574 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463783 4995 patch_prober.go:28] interesting pod/console-f9d7485db-zt9nn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463866 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-zt9nn" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" probeResult="failure" output="Get \"https://10.217.0.35:8443/health\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.463952 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.474089 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.479420 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.479463 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.494024 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc 
kubenswrapper[4995]: I0126 23:09:58.583025 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.583084 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.583170 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614113 4995 generic.go:334] "Generic (PLEG): container finished" podID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerID="85a6ad96b2e4587219604eaba4bfd026549d008f8e0ae682f8638f5bace71ac2" exitCode=0 Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerDied","Data":"85a6ad96b2e4587219604eaba4bfd026549d008f8e0ae682f8638f5bace71ac2"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.614451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerStarted","Data":"265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.629078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.629129 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"2791ea2f560df413a781ffdcf254d63067a2528c47ab19f2d416f080d3de6868"} Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.638262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-v665q" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684194 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684261 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684299 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.684866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.686269 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.761420 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"redhat-operators-wq2hm\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.788010 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.856210 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.861548 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.863499 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.872303 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.990979 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.991449 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:58 crc kubenswrapper[4995]: I0126 23:09:58.991675 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.092834 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " 
pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093277 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.093896 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.094076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.138471 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"redhat-operators-tkghc\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc 
kubenswrapper[4995]: I0126 23:09:59.202430 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.221403 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.227276 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.261766 4995 patch_prober.go:28] interesting pod/router-default-5444994796-tw45t container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 23:09:59 crc kubenswrapper[4995]: [+]has-synced ok Jan 26 23:09:59 crc kubenswrapper[4995]: [+]process-running ok Jan 26 23:09:59 crc kubenswrapper[4995]: healthz check failed Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.261835 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tw45t" podUID="24dc4d5e-e13d-4d4d-b1f8-390149f24544" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 23:09:59 crc kubenswrapper[4995]: W0126 23:09:59.267990 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5166d9b5_534e_4426_8085_a1900c7bdafb.slice/crio-e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd WatchSource:0}: Error finding container e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd: Status 404 returned error can't find the container with id e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.636635 4995 generic.go:334] "Generic (PLEG): 
container finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" exitCode=0 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.636713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.639258 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.640815 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" exitCode=0 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.641504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.641523 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerStarted","Data":"a0635a7bf961355bc048d1c04e92285d7c8c240f172e625a758ab7fa01b816d1"} Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.730166 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:09:59 crc kubenswrapper[4995]: W0126 23:09:59.769053 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cf84b12_2476_4bdf_92f2_016c722f74b5.slice/crio-d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29 WatchSource:0}: Error finding container d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29: Status 404 returned error can't find the container with id d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29 Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.804846 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.807677 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.809997 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.811510 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.811621 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.908855 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.909064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod 
\"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:09:59 crc kubenswrapper[4995]: I0126 23:09:59.973333 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.010476 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.046776 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111664 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") pod \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111754 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") pod \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\" (UID: \"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65\") " Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.111885 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" (UID: "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.112299 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.115133 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" (UID: "66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.133385 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.213593 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.256563 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.260299 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tw45t" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.637142 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.652185 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" exitCode=0 Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.652264 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661620 4995 generic.go:334] "Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" exitCode=0 Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661706 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" 
event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.661741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668191 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65","Type":"ContainerDied","Data":"265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f"} Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668241 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="265077d657bfb7c86ffcaaca72051ddeb65fd20c0b30c89bc3a9372759d0789f" Jan 26 23:10:00 crc kubenswrapper[4995]: I0126 23:10:00.668204 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.706972 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerStarted","Data":"975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284"} Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.707309 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerStarted","Data":"55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006"} Jan 26 23:10:01 crc kubenswrapper[4995]: I0126 23:10:01.726446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.726425162 podStartE2EDuration="2.726425162s" podCreationTimestamp="2026-01-26 23:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:01.725572931 +0000 UTC m=+105.890280406" watchObservedRunningTime="2026-01-26 23:10:01.726425162 +0000 UTC m=+105.891132627" Jan 26 23:10:02 crc kubenswrapper[4995]: I0126 23:10:02.717779 4995 generic.go:334] "Generic (PLEG): container finished" podID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerID="975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284" exitCode=0 Jan 26 23:10:02 crc kubenswrapper[4995]: I0126 23:10:02.718167 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerDied","Data":"975487f2a635a929bba403332114f06fc9e164d81ecc0aa07e72ed358806c284"} Jan 26 23:10:03 crc kubenswrapper[4995]: I0126 23:10:03.975324 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-dns/dns-default-wt84d" Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517004 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517285 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517022 4995 patch_prober.go:28] interesting pod/downloads-7954f5f757-pfw4t container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Jan 26 23:10:07 crc kubenswrapper[4995]: I0126 23:10:07.517705 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-pfw4t" podUID="ce7a362e-896b-4492-ac2c-08bd19bba7b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Jan 26 23:10:08 crc kubenswrapper[4995]: I0126 23:10:08.604784 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:10:08 crc kubenswrapper[4995]: I0126 23:10:08.620752 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.532783 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577214 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") pod \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") pod \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\" (UID: \"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f\") " Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.577665 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" (UID: "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.582812 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" (UID: "dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.680596 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.680664 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814742 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f","Type":"ContainerDied","Data":"55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006"} Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814790 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55757148a77a95d861771d7e26e070797711057e63a4fc15d0c0698103b9e006" Jan 26 23:10:11 crc kubenswrapper[4995]: I0126 23:10:11.814851 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 23:10:16 crc kubenswrapper[4995]: I0126 23:10:16.032085 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:10:17 crc kubenswrapper[4995]: I0126 23:10:17.521469 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-pfw4t" Jan 26 23:10:29 crc kubenswrapper[4995]: I0126 23:10:29.197658 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-zbzdl" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.774658 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.774938 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df626,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-wq2hm_openshift-marketplace(5166d9b5-534e-4426-8085-a1900c7bdafb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:33 crc kubenswrapper[4995]: E0126 23:10:33.776263 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" Jan 26 23:10:35 crc 
kubenswrapper[4995]: E0126 23:10:35.328856 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.417635 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.418172 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p76nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7ptdh_openshift-marketplace(869a6dc6-8120-4a1c-b424-1a06738aa55e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:35 crc kubenswrapper[4995]: E0126 23:10:35.419427 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" Jan 26 23:10:36 crc 
kubenswrapper[4995]: I0126 23:10:36.197157 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:36 crc kubenswrapper[4995]: E0126 23:10:36.197566 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.197668 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: E0126 23:10:36.197742 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.197817 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198012 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd9c0cd7-b1b3-4660-9b2f-3efcd3a0d61f" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198135 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b1fa76-16ef-4daf-ba5c-57ab0e8f7a65" containerName="pruner" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.198631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.206067 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.206161 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.212231 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.340973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.341036 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.442737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.442878 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.443335 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.465366 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:36 crc kubenswrapper[4995]: I0126 23:10:36.553392 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.159516 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.229854 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.230036 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxr7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-q6mtp_openshift-marketplace(6aacdfb4-d893-49a9-ae77-a150f1c0a430): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.231545 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" Jan 26 23:10:37 crc 
kubenswrapper[4995]: E0126 23:10:37.349177 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.349579 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c27fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-px4t9_openshift-marketplace(38be674d-6ae2-441d-b361-a9eea3b694a7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.350912 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.370864 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.370991 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbb4m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-8z855_openshift-marketplace(b7295e1f-e3cb-4710-8763-b02b3e9ed67b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.372239 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" Jan 26 23:10:37 crc 
kubenswrapper[4995]: E0126 23:10:37.429072 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.429272 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xbvbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-6wf22_openshift-marketplace(58513b5e-460e-4344-91e3-1d20e26fd533): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 23:10:37 crc kubenswrapper[4995]: E0126 23:10:37.430477 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6wf22" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" Jan 26 23:10:37 crc kubenswrapper[4995]: I0126 23:10:37.448363 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 23:10:37 crc kubenswrapper[4995]: W0126 23:10:37.460230 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda88ce357_60cd_42cf_9482_f256204a2d72.slice/crio-0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e WatchSource:0}: Error finding container 0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e: Status 404 returned error can't find the container with id 0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.007070 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.010204 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" exitCode=0 Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.010324 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.015060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerStarted","Data":"cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8"} Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.015161 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerStarted","Data":"0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e"} Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.017903 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.020222 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.020270 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6wf22" 
podUID="58513b5e-460e-4344-91e3-1d20e26fd533" Jan 26 23:10:38 crc kubenswrapper[4995]: E0126 23:10:38.026018 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" Jan 26 23:10:38 crc kubenswrapper[4995]: I0126 23:10:38.146924 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.146905848 podStartE2EDuration="2.146905848s" podCreationTimestamp="2026-01-26 23:10:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:38.142479221 +0000 UTC m=+142.307186686" watchObservedRunningTime="2026-01-26 23:10:38.146905848 +0000 UTC m=+142.311613313" Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.022970 4995 generic.go:334] "Generic (PLEG): container finished" podID="a88ce357-60cd-42cf-9482-f256204a2d72" containerID="cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8" exitCode=0 Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.023133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerDied","Data":"cb8107746b4601bf6993601dcd300e1d12614297b3e959491c279ed96b11e4c8"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.029784 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerStarted","Data":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.032091 4995 generic.go:334] 
"Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" exitCode=0 Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.032162 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} Jan 26 23:10:39 crc kubenswrapper[4995]: I0126 23:10:39.075078 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9wv6w" podStartSLOduration=3.282365653 podStartE2EDuration="42.075060013s" podCreationTimestamp="2026-01-26 23:09:57 +0000 UTC" firstStartedPulling="2026-01-26 23:09:59.64243597 +0000 UTC m=+103.807143435" lastFinishedPulling="2026-01-26 23:10:38.43513033 +0000 UTC m=+142.599837795" observedRunningTime="2026-01-26 23:10:39.071877159 +0000 UTC m=+143.236584644" watchObservedRunningTime="2026-01-26 23:10:39.075060013 +0000 UTC m=+143.239767478" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.038359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerStarted","Data":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.278718 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.295365 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tkghc" podStartSLOduration=4.477526972 podStartE2EDuration="42.295345824s" podCreationTimestamp="2026-01-26 23:09:58 +0000 UTC" firstStartedPulling="2026-01-26 23:10:01.709194683 +0000 UTC m=+105.873902148" lastFinishedPulling="2026-01-26 23:10:39.527013525 +0000 UTC m=+143.691721000" observedRunningTime="2026-01-26 23:10:40.058649845 +0000 UTC m=+144.223357310" watchObservedRunningTime="2026-01-26 23:10:40.295345824 +0000 UTC m=+144.460053289" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458780 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") pod \"a88ce357-60cd-42cf-9482-f256204a2d72\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458877 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a88ce357-60cd-42cf-9482-f256204a2d72" (UID: "a88ce357-60cd-42cf-9482-f256204a2d72"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.458896 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") pod \"a88ce357-60cd-42cf-9482-f256204a2d72\" (UID: \"a88ce357-60cd-42cf-9482-f256204a2d72\") " Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.459440 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a88ce357-60cd-42cf-9482-f256204a2d72-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.464495 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a88ce357-60cd-42cf-9482-f256204a2d72" (UID: "a88ce357-60cd-42cf-9482-f256204a2d72"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.560344 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a88ce357-60cd-42cf-9482-f256204a2d72-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.893278 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.893626 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.996653 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:40 crc kubenswrapper[4995]: E0126 23:10:40.996966 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.996983 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.997180 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a88ce357-60cd-42cf-9482-f256204a2d72" containerName="pruner" Jan 26 23:10:40 crc kubenswrapper[4995]: I0126 23:10:40.997718 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.006028 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.043969 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"a88ce357-60cd-42cf-9482-f256204a2d72","Type":"ContainerDied","Data":"0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e"} Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.044004 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef496ef86ec50614443ca5da0ec4f600dc4e31183661f290d78b921912bd52e" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.044049 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.066837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.066972 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.067004 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168201 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168279 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.168369 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.186079 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.317866 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:10:41 crc kubenswrapper[4995]: I0126 23:10:41.522520 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 23:10:41 crc kubenswrapper[4995]: W0126 23:10:41.530661 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbd391cd1_35c1_4ee8_98a3_80c0d9cec0e9.slice/crio-357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79 WatchSource:0}: Error finding container 357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79: Status 404 returned error can't find the container with id 357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79 Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.050872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerStarted","Data":"0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8"} Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.050926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerStarted","Data":"357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79"} Jan 26 23:10:42 crc kubenswrapper[4995]: I0126 23:10:42.066161 4995 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.066137723 podStartE2EDuration="2.066137723s" podCreationTimestamp="2026-01-26 23:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:10:42.063930405 +0000 UTC m=+146.228637870" watchObservedRunningTime="2026-01-26 23:10:42.066137723 +0000 UTC m=+146.230845188" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610227 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610762 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610824 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.610883 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.612001 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.614761 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.614885 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.621769 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.622773 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.629095 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.635626 4995 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.638244 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.742767 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.757207 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:44 crc kubenswrapper[4995]: I0126 23:10:44.771546 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 23:10:45 crc kubenswrapper[4995]: I0126 23:10:45.068559 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9f5c6275c4a873be12c03f32d919186152fdddb54566874aca136d669b44858d"} Jan 26 23:10:45 crc kubenswrapper[4995]: W0126 23:10:45.298259 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57 WatchSource:0}: Error finding container 916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57: Status 404 returned error can't find the container with id 916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57 Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.078307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c1d526f80651045d33440f9de46e37fea79d0f7d99966fa5859efbe73f04584e"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.078675 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"916404635fe564389b34611a150390bc7fba03932f77e2a4b46e9543fcf67f57"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.081668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"44e18be84c50228e1c4ef781aaf7488155aa444f8d111121ce48b2a0ad30dcbe"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.081714 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"898fd627d3440adbd2f5ba505854131403b4f178136a6812b1b76d0f86eb41f7"} Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.082305 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:10:46 crc kubenswrapper[4995]: I0126 23:10:46.084673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2815508e1c4e25728ff5a6ec781eca97c15b0fad259431fdf47b7efb25ac98f4"} Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.190169 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.190500 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:48 crc kubenswrapper[4995]: I0126 23:10:48.327038 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.135451 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.177430 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 
23:10:49.205213 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.205278 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:49 crc kubenswrapper[4995]: I0126 23:10:49.244528 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:50 crc kubenswrapper[4995]: I0126 23:10:50.152292 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.114528 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.114917 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9wv6w" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" containerID="cri-o://d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" gracePeriod=2 Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.359910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.612455 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694700 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694764 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.694820 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") pod \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\" (UID: \"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c\") " Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.695568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities" (OuterVolumeSpecName: "utilities") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.703360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7" (OuterVolumeSpecName: "kube-api-access-jn7v7") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "kube-api-access-jn7v7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.717049 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" (UID: "387c9fb6-21cf-40c7-b6c9-0f8f50359d0c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796030 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796059 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jn7v7\" (UniqueName: \"kubernetes.io/projected/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-kube-api-access-jn7v7\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:51 crc kubenswrapper[4995]: I0126 23:10:51.796070 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.121414 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.121472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.123870 4995 generic.go:334] "Generic (PLEG): container 
finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.123934 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129122 4995 generic.go:334] "Generic (PLEG): container finished" podID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129166 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129178 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9wv6w" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9wv6w" event={"ID":"387c9fb6-21cf-40c7-b6c9-0f8f50359d0c","Type":"ContainerDied","Data":"a0635a7bf961355bc048d1c04e92285d7c8c240f172e625a758ab7fa01b816d1"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.129249 4995 scope.go:117] "RemoveContainer" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.130861 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" exitCode=0 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.131130 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tkghc" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" containerID="cri-o://81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" gracePeriod=2 Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.131372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda"} Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.174560 4995 scope.go:117] "RemoveContainer" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.199717 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.201925 4995 scope.go:117] "RemoveContainer" 
containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.201930 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9wv6w"] Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316255 4995 scope.go:117] "RemoveContainer" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.316679 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": container with ID starting with d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027 not found: ID does not exist" containerID="d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316729 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027"} err="failed to get container status \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": rpc error: code = NotFound desc = could not find container \"d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027\": container with ID starting with d146908ae9e4602b3f340432e1dfc660f57d7bbd0cf7b5fe6bfa7198df702027 not found: ID does not exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.316778 4995 scope.go:117] "RemoveContainer" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.317263 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": container with ID starting with 
da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0 not found: ID does not exist" containerID="da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317293 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0"} err="failed to get container status \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": rpc error: code = NotFound desc = could not find container \"da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0\": container with ID starting with da00e0ab9877bbdb6a3e4759ff5d2b99438b7ec828e588b96a8267c892ac09d0 not found: ID does not exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317313 4995 scope.go:117] "RemoveContainer" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: E0126 23:10:52.317670 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": container with ID starting with e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55 not found: ID does not exist" containerID="e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.317688 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55"} err="failed to get container status \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": rpc error: code = NotFound desc = could not find container \"e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55\": container with ID starting with e2901b3aac0e9d9fbadbe3f81a8a3303750520bd09a0718420cd2575d6fc4a55 not found: ID does not 
exist" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.480502 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505311 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.505339 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") pod \"2cf84b12-2476-4bdf-92f2-016c722f74b5\" (UID: \"2cf84b12-2476-4bdf-92f2-016c722f74b5\") " Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.506401 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities" (OuterVolumeSpecName: "utilities") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.509143 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf" (OuterVolumeSpecName: "kube-api-access-5h6bf") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "kube-api-access-5h6bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.526744 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" path="/var/lib/kubelet/pods/387c9fb6-21cf-40c7-b6c9-0f8f50359d0c/volumes" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.607338 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h6bf\" (UniqueName: \"kubernetes.io/projected/2cf84b12-2476-4bdf-92f2-016c722f74b5-kube-api-access-5h6bf\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.607390 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.644073 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cf84b12-2476-4bdf-92f2-016c722f74b5" (UID: "2cf84b12-2476-4bdf-92f2-016c722f74b5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:10:52 crc kubenswrapper[4995]: I0126 23:10:52.708652 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cf84b12-2476-4bdf-92f2-016c722f74b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.143430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerStarted","Data":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.148369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerStarted","Data":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151369 4995 generic.go:334] "Generic (PLEG): container finished" podID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" exitCode=0 Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151402 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tkghc" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151435 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151458 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tkghc" event={"ID":"2cf84b12-2476-4bdf-92f2-016c722f74b5","Type":"ContainerDied","Data":"d97600484aa0b6e5a49fbd12d065990500a065d91605ffe0a38d4313a4ca5f29"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.151475 4995 scope.go:117] "RemoveContainer" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.153544 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" exitCode=0 Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.153585 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.160183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerStarted","Data":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.168241 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-px4t9" 
podStartSLOduration=2.238175267 podStartE2EDuration="56.168224253s" podCreationTimestamp="2026-01-26 23:09:57 +0000 UTC" firstStartedPulling="2026-01-26 23:09:58.63095429 +0000 UTC m=+102.795661755" lastFinishedPulling="2026-01-26 23:10:52.561003276 +0000 UTC m=+156.725710741" observedRunningTime="2026-01-26 23:10:53.163731075 +0000 UTC m=+157.328438540" watchObservedRunningTime="2026-01-26 23:10:53.168224253 +0000 UTC m=+157.332931718" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.184285 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8z855" podStartSLOduration=2.087719936 podStartE2EDuration="58.184272258s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.543389591 +0000 UTC m=+100.708097056" lastFinishedPulling="2026-01-26 23:10:52.639941913 +0000 UTC m=+156.804649378" observedRunningTime="2026-01-26 23:10:53.182329586 +0000 UTC m=+157.347037051" watchObservedRunningTime="2026-01-26 23:10:53.184272258 +0000 UTC m=+157.348979723" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.186899 4995 scope.go:117] "RemoveContainer" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.220807 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wq2hm" podStartSLOduration=3.205879869 podStartE2EDuration="55.220790864s" podCreationTimestamp="2026-01-26 23:09:58 +0000 UTC" firstStartedPulling="2026-01-26 23:10:00.657663581 +0000 UTC m=+104.822371046" lastFinishedPulling="2026-01-26 23:10:52.672574576 +0000 UTC m=+156.837282041" observedRunningTime="2026-01-26 23:10:53.21799555 +0000 UTC m=+157.382703015" watchObservedRunningTime="2026-01-26 23:10:53.220790864 +0000 UTC m=+157.385498329" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.232546 4995 scope.go:117] "RemoveContainer" 
containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.252230 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.255249 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tkghc"] Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.275983 4995 scope.go:117] "RemoveContainer" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.277496 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": container with ID starting with 81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb not found: ID does not exist" containerID="81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277540 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb"} err="failed to get container status \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": rpc error: code = NotFound desc = could not find container \"81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb\": container with ID starting with 81fa053ce5d49bf91f4b0a65eb23ed68571047e20891316f46e9331b6a39acfb not found: ID does not exist" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277574 4995 scope.go:117] "RemoveContainer" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.277818 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": container with ID starting with 8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf not found: ID does not exist" containerID="8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277847 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf"} err="failed to get container status \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": rpc error: code = NotFound desc = could not find container \"8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf\": container with ID starting with 8cca32639b99a03e156ba6c975db418ce784423fbf969f75c0e0cb1b15e666bf not found: ID does not exist" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.277866 4995 scope.go:117] "RemoveContainer" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: E0126 23:10:53.278164 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": container with ID starting with 9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462 not found: ID does not exist" containerID="9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462" Jan 26 23:10:53 crc kubenswrapper[4995]: I0126 23:10:53.278208 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462"} err="failed to get container status \"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": rpc error: code = NotFound desc = could not find container 
\"9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462\": container with ID starting with 9fb0326d6729cafc8524d214c89c9aea5555b3a33e3d814defb5a6d4cfe07462 not found: ID does not exist" Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.168629 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" exitCode=0 Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.168713 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.173253 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" exitCode=0 Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.173324 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.176978 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerStarted","Data":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.226857 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7ptdh" podStartSLOduration=3.168975587 podStartE2EDuration="59.226837029s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" 
firstStartedPulling="2026-01-26 23:09:57.575841731 +0000 UTC m=+101.740549196" lastFinishedPulling="2026-01-26 23:10:53.633703173 +0000 UTC m=+157.798410638" observedRunningTime="2026-01-26 23:10:54.22458754 +0000 UTC m=+158.389295015" watchObservedRunningTime="2026-01-26 23:10:54.226837029 +0000 UTC m=+158.391544494" Jan 26 23:10:54 crc kubenswrapper[4995]: I0126 23:10:54.524567 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" path="/var/lib/kubelet/pods/2cf84b12-2476-4bdf-92f2-016c722f74b5/volumes" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.773053 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.773095 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:55 crc kubenswrapper[4995]: I0126 23:10:55.814027 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.182016 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.182077 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:56 crc kubenswrapper[4995]: I0126 23:10:56.235267 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.792951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.793276 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:57 crc kubenswrapper[4995]: I0126 23:10:57.846999 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.236315 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.254909 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.791756 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.792085 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:58 crc kubenswrapper[4995]: I0126 23:10:58.846118 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.202273 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerStarted","Data":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.206163 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerStarted","Data":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.225319 4995 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/community-operators-q6mtp" podStartSLOduration=2.788459708 podStartE2EDuration="1m4.225301006s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.542868718 +0000 UTC m=+100.707576183" lastFinishedPulling="2026-01-26 23:10:57.979710016 +0000 UTC m=+162.144417481" observedRunningTime="2026-01-26 23:10:59.224323021 +0000 UTC m=+163.389030516" watchObservedRunningTime="2026-01-26 23:10:59.225301006 +0000 UTC m=+163.390008481" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.249238 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6wf22" podStartSLOduration=1.9217418670000002 podStartE2EDuration="1m4.249204649s" podCreationTimestamp="2026-01-26 23:09:55 +0000 UTC" firstStartedPulling="2026-01-26 23:09:56.548232889 +0000 UTC m=+100.712940354" lastFinishedPulling="2026-01-26 23:10:58.875695671 +0000 UTC m=+163.040403136" observedRunningTime="2026-01-26 23:10:59.243268342 +0000 UTC m=+163.407975807" watchObservedRunningTime="2026-01-26 23:10:59.249204649 +0000 UTC m=+163.413912124" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.273661 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:10:59 crc kubenswrapper[4995]: I0126 23:10:59.762355 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:00 crc kubenswrapper[4995]: I0126 23:11:00.211226 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7ptdh" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" containerID="cri-o://93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" gracePeriod=2 Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.143351 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220466 4995 generic.go:334] "Generic (PLEG): container finished" podID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" exitCode=0 Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220535 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7ptdh" event={"ID":"869a6dc6-8120-4a1c-b424-1a06738aa55e","Type":"ContainerDied","Data":"20edf153be996b3cf630c557f436ea3736b0f71a5fce8a127880088910f8cf24"} Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220554 4995 scope.go:117] "RemoveContainer" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.220587 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7ptdh" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.236574 4995 scope.go:117] "RemoveContainer" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.257433 4995 scope.go:117] "RemoveContainer" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261037 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.261270 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") pod \"869a6dc6-8120-4a1c-b424-1a06738aa55e\" (UID: \"869a6dc6-8120-4a1c-b424-1a06738aa55e\") " Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.262133 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities" (OuterVolumeSpecName: "utilities") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.267325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx" (OuterVolumeSpecName: "kube-api-access-p76nx") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "kube-api-access-p76nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.270991 4995 scope.go:117] "RemoveContainer" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.271931 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": container with ID starting with 93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899 not found: ID does not exist" containerID="93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.271964 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899"} err="failed to get container status \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": rpc error: code = NotFound desc = could not find container \"93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899\": container with ID starting with 93b9d27a2e13e73ac5437df9064bdd103b447f4f7557b9833354d7bfbb0c9899 not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.271984 4995 scope.go:117] "RemoveContainer" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.272459 
4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": container with ID starting with 277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576 not found: ID does not exist" containerID="277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576"} err="failed to get container status \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": rpc error: code = NotFound desc = could not find container \"277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576\": container with ID starting with 277b7dd5dab1a5a993cc558ce8f99f0145820e17ee95a793b801889a9c7cd576 not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272489 4995 scope.go:117] "RemoveContainer" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: E0126 23:11:01.272964 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": container with ID starting with 86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f not found: ID does not exist" containerID="86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.272980 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f"} err="failed to get container status \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": rpc error: code = 
NotFound desc = could not find container \"86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f\": container with ID starting with 86fda1d47328083695c772777f762b59a60f455a7248563df6ee57c53397ec6f not found: ID does not exist" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.310056 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869a6dc6-8120-4a1c-b424-1a06738aa55e" (UID: "869a6dc6-8120-4a1c-b424-1a06738aa55e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362723 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p76nx\" (UniqueName: \"kubernetes.io/projected/869a6dc6-8120-4a1c-b424-1a06738aa55e-kube-api-access-p76nx\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362765 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.362778 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869a6dc6-8120-4a1c-b424-1a06738aa55e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.562056 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:01 crc kubenswrapper[4995]: I0126 23:11:01.572176 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7ptdh"] Jan 26 23:11:02 crc kubenswrapper[4995]: I0126 23:11:02.526647 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" path="/var/lib/kubelet/pods/869a6dc6-8120-4a1c-b424-1a06738aa55e/volumes" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.576093 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.576556 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.665802 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:05 crc kubenswrapper[4995]: I0126 23:11:05.842139 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.019324 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.019607 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.052658 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.312420 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.321411 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:11:06 crc kubenswrapper[4995]: I0126 23:11:06.418557 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d"] Jan 26 23:11:08 crc kubenswrapper[4995]: I0126 23:11:08.769912 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:08 crc kubenswrapper[4995]: I0126 23:11:08.770313 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q6mtp" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" containerID="cri-o://bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" gracePeriod=2 Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.163120 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxr7q\" (UniqueName: \"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262636 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.262662 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") pod \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\" (UID: \"6aacdfb4-d893-49a9-ae77-a150f1c0a430\") " Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.263782 4995 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities" (OuterVolumeSpecName: "utilities") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.271337 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q" (OuterVolumeSpecName: "kube-api-access-dxr7q") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "kube-api-access-dxr7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277504 4995 generic.go:334] "Generic (PLEG): container finished" podID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" exitCode=0 Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277557 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q6mtp" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q6mtp" event={"ID":"6aacdfb4-d893-49a9-ae77-a150f1c0a430","Type":"ContainerDied","Data":"20ff719d1a611af55cb9cea51a19289e5f98717222c932e73ed4f4672c8a5fcb"} Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.277612 4995 scope.go:117] "RemoveContainer" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.292688 4995 scope.go:117] "RemoveContainer" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.305763 4995 scope.go:117] "RemoveContainer" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.321442 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6aacdfb4-d893-49a9-ae77-a150f1c0a430" (UID: "6aacdfb4-d893-49a9-ae77-a150f1c0a430"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324060 4995 scope.go:117] "RemoveContainer" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.324777 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": container with ID starting with bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02 not found: ID does not exist" containerID="bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324821 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02"} err="failed to get container status \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": rpc error: code = NotFound desc = could not find container \"bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02\": container with ID starting with bd0cdf40c8d40727ec6600b82e3dfac6985283a516ed98df126b330d2e7d6d02 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.324849 4995 scope.go:117] "RemoveContainer" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.325176 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": container with ID starting with 67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359 not found: ID does not exist" containerID="67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325215 
4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359"} err="failed to get container status \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": rpc error: code = NotFound desc = could not find container \"67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359\": container with ID starting with 67a74431f4f6addadaf7df59e689cfe53069ae8d36b2383f12d3bf3e3da9f359 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325237 4995 scope.go:117] "RemoveContainer" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: E0126 23:11:09.325531 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": container with ID starting with ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9 not found: ID does not exist" containerID="ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.325557 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9"} err="failed to get container status \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": rpc error: code = NotFound desc = could not find container \"ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9\": container with ID starting with ba3b138159cebe2c3db048ac0b45a1b76c8719a920362baa097441228115e3f9 not found: ID does not exist" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363891 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxr7q\" (UniqueName: 
\"kubernetes.io/projected/6aacdfb4-d893-49a9-ae77-a150f1c0a430-kube-api-access-dxr7q\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363940 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.363954 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6aacdfb4-d893-49a9-ae77-a150f1c0a430-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.601865 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:09 crc kubenswrapper[4995]: I0126 23:11:09.604913 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q6mtp"] Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.529083 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" path="/var/lib/kubelet/pods/6aacdfb4-d893-49a9-ae77-a150f1c0a430/volumes" Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.894130 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:11:10 crc kubenswrapper[4995]: I0126 23:11:10.894202 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 
23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.501221 4995 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502391 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502422 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502456 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502474 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502503 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502520 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502539 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502558 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502580 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-content" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.502596 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502624 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502638 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502661 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502675 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502692 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502760 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502785 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502798 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-content" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.502831 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502846 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502858 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="extract-content" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.502875 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.502888 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="extract-utilities" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503067 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aacdfb4-d893-49a9-ae77-a150f1c0a430" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503094 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="869a6dc6-8120-4a1c-b424-1a06738aa55e" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503149 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="387c9fb6-21cf-40c7-b6c9-0f8f50359d0c" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503166 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf84b12-2476-4bdf-92f2-016c722f74b5" containerName="registry-server" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.503746 4995 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 
23:11:19.504004 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504020 4995 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504349 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504404 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504574 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504588 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504668 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" gracePeriod=15 Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504887 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504905 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504921 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504930 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504943 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504951 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504967 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504975 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.504985 4995 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.504992 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505001 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505008 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505022 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505029 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.505038 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505045 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505186 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505199 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc 
kubenswrapper[4995]: I0126 23:11:19.505210 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505218 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505228 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505238 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.505433 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.508930 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.538063 4995 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685518 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685594 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685671 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685684 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685703 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.685718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786945 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.786990 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787026 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787054 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787086 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787080 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787146 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787148 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787183 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787158 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.787084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: I0126 23:11:19.839018 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:19 crc kubenswrapper[4995]: W0126 23:11:19.864248 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9 WatchSource:0}: Error finding container 3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9: Status 404 returned error can't find the container with id 3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9 Jan 26 23:11:19 crc kubenswrapper[4995]: E0126 23:11:19.867064 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6ac0ca7c71bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,LastTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.335986 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.338405 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339314 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339347 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339361 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339373 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" exitCode=2 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.339438 4995 scope.go:117] "RemoveContainer" containerID="dc132adc8b716af7721e4a707753e43fe6c1d62a2c9d73ad9c471defd117aac7" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.342830 4995 generic.go:334] "Generic (PLEG): container finished" podID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerID="0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8" exitCode=0 Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.343005 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerDied","Data":"0df4d5b5f690365e5d4a48931cdee454a300ba6752a514b09c733175475487c8"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344008 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344797 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.344828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3e9bb697cca6d93eeafdec7d9cdfad94d568f2445b630d91c9de8c91272c73a9"} Jan 26 23:11:20 crc kubenswrapper[4995]: I0126 23:11:20.345587 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:20 crc kubenswrapper[4995]: E0126 23:11:20.349222 4995 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.357023 4995 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.647814 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.649337 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712321 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712626 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") pod \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\" (UID: \"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9\") " Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712458 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock" 
(OuterVolumeSpecName: "var-lock") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.712868 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.718615 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" (UID: "bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813846 4995 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813880 4995 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:21 crc kubenswrapper[4995]: I0126 23:11:21.813891 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.304366 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.305167 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.306328 4995 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.306883 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.371405 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372358 4995 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" exitCode=0 Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372425 4995 scope.go:117] "RemoveContainer" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.372497 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9","Type":"ContainerDied","Data":"357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79"} Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374474 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357a888df4ab9fde8c5f839f2b79ccca8d003726e12de5c60c7632053574ba79" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.374513 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.388310 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.388463 4995 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.397980 4995 scope.go:117] "RemoveContainer" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.423883 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.423977 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424007 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424017 4995 scope.go:117] "RemoveContainer" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424081 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424300 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424313 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.424554 4995 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.450734 4995 scope.go:117] "RemoveContainer" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.470988 4995 scope.go:117] "RemoveContainer" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.491356 4995 scope.go:117] "RemoveContainer" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.515293 4995 scope.go:117] "RemoveContainer" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.515949 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": container with ID starting with 1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4 not found: ID does not exist" containerID="1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516014 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4"} err="failed to get container status \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": rpc error: code = NotFound desc = could not find container \"1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4\": container with ID starting with 1ca74d9609da1f88be3e6c7e7d5794391d8aef93de04e2933169f9c324ef3db4 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516042 4995 scope.go:117] "RemoveContainer" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.516816 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": container with ID starting with 1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561 not found: ID does not exist" containerID="1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516836 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561"} err="failed to get container status \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": rpc error: code = NotFound desc = could not find container \"1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561\": container with ID starting with 1e71f20ee080453eadeba9f9c3c286bfedf89b83e9dc838ffb9915bb03624561 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.516850 4995 scope.go:117] "RemoveContainer" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 
23:11:22.517225 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": container with ID starting with 079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b not found: ID does not exist" containerID="079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517265 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b"} err="failed to get container status \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": rpc error: code = NotFound desc = could not find container \"079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b\": container with ID starting with 079aa4553262ba4fcc48431a4f1aae2efcb7d721ecdfd3e49646ba697d634b2b not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517286 4995 scope.go:117] "RemoveContainer" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.517662 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": container with ID starting with b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6 not found: ID does not exist" containerID="b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517690 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6"} err="failed to get container status \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": rpc 
error: code = NotFound desc = could not find container \"b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6\": container with ID starting with b67ef11c90a6b20b4c326581d65eb1311afc6a0814867070d848e96d6fd8e8d6 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517710 4995 scope.go:117] "RemoveContainer" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.517917 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": container with ID starting with bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3 not found: ID does not exist" containerID="bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517938 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3"} err="failed to get container status \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": rpc error: code = NotFound desc = could not find container \"bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3\": container with ID starting with bb7413a751dd6d5cc3c991be3f415203336562e27ca2003b8585b0124bbbffe3 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.517952 4995 scope.go:117] "RemoveContainer" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: E0126 23:11:22.518250 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": container with ID starting with 
701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59 not found: ID does not exist" containerID="701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.518272 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59"} err="failed to get container status \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": rpc error: code = NotFound desc = could not find container \"701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59\": container with ID starting with 701db61690897d8795611db6d71fd5a7586ed293eca0f575f75c301f5bd6ae59 not found: ID does not exist" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.523158 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.525177 4995 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.525200 4995 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.675432 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:22 crc kubenswrapper[4995]: I0126 23:11:22.675623 4995 status_manager.go:851] 
"Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763062 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763478 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:24 crc kubenswrapper[4995]: I0126 23:11:24.763841 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.129708 4995 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.164:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e6ac0ca7c71bb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,LastTimestamp:2026-01-26 23:11:19.866601915 +0000 UTC m=+184.031309380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.879638 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880181 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880583 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.880870 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 
23:11:25.881172 4995 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:25 crc kubenswrapper[4995]: I0126 23:11:25.881210 4995 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 23:11:25 crc kubenswrapper[4995]: E0126 23:11:25.881714 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="200ms" Jan 26 23:11:26 crc kubenswrapper[4995]: E0126 23:11:26.083216 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="400ms" Jan 26 23:11:26 crc kubenswrapper[4995]: E0126 23:11:26.485072 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="800ms" Jan 26 23:11:26 crc kubenswrapper[4995]: I0126 23:11:26.519389 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:26 crc kubenswrapper[4995]: I0126 23:11:26.520074 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:27 crc kubenswrapper[4995]: E0126 23:11:27.286050 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="1.6s" Jan 26 23:11:28 crc kubenswrapper[4995]: E0126 23:11:28.887062 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="3.2s" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.444347 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" containerID="cri-o://47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" gracePeriod=15 Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.807681 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.808677 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.809222 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.809815 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946177 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946245 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") pod 
\"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946304 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.946373 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947498 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947537 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947562 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947588 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947612 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947818 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947903 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.947995 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948018 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: 
\"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948050 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") pod \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\" (UID: \"4d4d9e36-8d49-41a8-a04b-194a5f652f94\") " Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948141 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948428 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948460 4995 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948479 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.948491 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.950170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.952928 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.953033 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954078 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954161 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.954487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.956902 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.957295 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.959170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw" (OuterVolumeSpecName: "kube-api-access-pqvmw") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "kube-api-access-pqvmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:11:31 crc kubenswrapper[4995]: I0126 23:11:31.959382 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "4d4d9e36-8d49-41a8-a04b-194a5f652f94" (UID: "4d4d9e36-8d49-41a8-a04b-194a5f652f94"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049862 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049915 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049931 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049946 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049963 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqvmw\" (UniqueName: \"kubernetes.io/projected/4d4d9e36-8d49-41a8-a04b-194a5f652f94-kube-api-access-pqvmw\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049976 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.049989 4995 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/4d4d9e36-8d49-41a8-a04b-194a5f652f94-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050001 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050014 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050028 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.050042 4995 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4d4d9e36-8d49-41a8-a04b-194a5f652f94-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.088441 4995 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.164:6443: connect: connection refused" interval="6.4s" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446175 4995 generic.go:334] "Generic (PLEG): container finished" podID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" exitCode=0 Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446257 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerDied","Data":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" event={"ID":"4d4d9e36-8d49-41a8-a04b-194a5f652f94","Type":"ContainerDied","Data":"0689043097d8a067e4df58fd7ad33b4d1504904c89d0939b98d21bff6ddfa350"} Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446340 4995 scope.go:117] "RemoveContainer" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.446382 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.447726 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.448379 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.448669 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.465354 4995 scope.go:117] "RemoveContainer" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.465803 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466307 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.466590 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": container with ID starting with 47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc not found: ID does not exist" containerID="47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466634 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc"} err="failed to get container status \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": 
rpc error: code = NotFound desc = could not find container \"47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc\": container with ID starting with 47560f58728a91812958d11ae517401037fac181a95e33f6661ac3fed36bb3dc not found: ID does not exist" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.466640 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.516836 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.517784 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.518124 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.518331 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.537397 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.537444 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:32 crc kubenswrapper[4995]: E0126 23:11:32.537951 4995 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:32 crc kubenswrapper[4995]: I0126 23:11:32.538484 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.462422 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.462954 4995 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992" exitCode=1 Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.463018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.464025 4995 scope.go:117] "RemoveContainer" containerID="cd88a34274cac97d0314b00a6abeee7fc5ca2ef8535bc6c8a23749e5160fe992" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.464463 4995 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.465331 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.466198 4995 
status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.466553 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468255 4995 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="8a490820995659c15a6b2e4d5b62daa45fdd6e1b431a111f7f7f908b343e4e3a" exitCode=0 Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"8a490820995659c15a6b2e4d5b62daa45fdd6e1b431a111f7f7f908b343e4e3a"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d2314e2188faeaf32a6cf21127483f220cdebc766690e36d37235d02e8d87366"} Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468762 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.468797 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:33 crc kubenswrapper[4995]: E0126 23:11:33.469207 4995 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.469325 4995 status_manager.go:851] "Failed to get status for pod" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.469705 4995 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.470218 4995 status_manager.go:851] "Failed to get status for pod" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" pod="openshift-network-diagnostics/network-check-target-xd92c" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-xd92c\": dial tcp 38.102.83.164:6443: connect: connection refused" Jan 26 23:11:33 crc kubenswrapper[4995]: I0126 23:11:33.470588 4995 status_manager.go:851] "Failed to get status for pod" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" pod="openshift-authentication/oauth-openshift-558db77b4-tzh2d" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-tzh2d\": dial tcp 
38.102.83.164:6443: connect: connection refused" Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.475759 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3173b67394f85d47ffea3cd5e2d8b4029e40a41a461ce14859fe410ab00e828a"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476091 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d699e47f688ad89f79caa3b3fe2034eb852f895f20fe2d9ee960037f30b3ab67"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64ff6d259144ecf1b8731927f2c6f62e970ff01c493c170c5de12c617fc05f46"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.476136 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"890902989ae8b982296b97cc98bf5a7ead6595eda9b0820695dbd7ae4fcf527c"} Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.479998 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 23:11:34 crc kubenswrapper[4995]: I0126 23:11:34.480057 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8f6a9084d0f6b2040299d77f9b92e16e0c42ce0333f615cc47331d4182a5a4b9"} Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490371 4995 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"49c79aaa5e427f465e5ab4760f0013c662bac62309e2669c05b123ab10d9765d"} Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490715 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490853 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:35 crc kubenswrapper[4995]: I0126 23:11:35.490877 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:36 crc kubenswrapper[4995]: I0126 23:11:36.641184 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:36 crc kubenswrapper[4995]: I0126 23:11:36.645896 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.502230 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.538961 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.539174 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:37 crc kubenswrapper[4995]: I0126 23:11:37.546601 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 
23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.497040 4995 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.550297 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.550330 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.554350 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.556487 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3f37b1-bc5c-4bcf-8f82-5be5b2c5cb75" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894253 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894334 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.894398 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.895344 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:11:40 crc kubenswrapper[4995]: I0126 23:11:40.895485 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" gracePeriod=600 Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556704 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" exitCode=0 Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556798 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c"} Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.556981 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.557228 4995 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:41 crc kubenswrapper[4995]: I0126 23:11:41.557241 4995 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1ceee02e-61ac-4e0c-af1d-39aff19627ac" Jan 26 23:11:46 crc kubenswrapper[4995]: I0126 23:11:46.535289 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="5d3f37b1-bc5c-4bcf-8f82-5be5b2c5cb75" Jan 26 23:11:49 crc kubenswrapper[4995]: I0126 23:11:49.887759 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.046082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.086288 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.719406 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.788981 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 23:11:50 crc kubenswrapper[4995]: I0126 23:11:50.870295 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.247271 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.302640 4995 
reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.335701 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.572708 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.592304 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.716925 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.845795 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 23:11:51 crc kubenswrapper[4995]: I0126 23:11:51.872643 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.124013 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.200845 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.266590 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.278372 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 
23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.329470 4995 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.341175 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.364380 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.443347 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.541531 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.837517 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.842911 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 23:11:52 crc kubenswrapper[4995]: I0126 23:11:52.875413 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.165583 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.203134 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.247585 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.267382 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.346082 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.390428 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.394382 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.562173 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.562231 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.600133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.662623 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.673736 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.737757 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 
23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.842848 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.847220 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 23:11:53 crc kubenswrapper[4995]: I0126 23:11:53.905361 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.021313 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.021377 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.054350 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.134649 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.203353 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.335487 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.486591 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.550309 4995 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.780981 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.820650 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.830626 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.832296 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.894966 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 23:11:54 crc kubenswrapper[4995]: I0126 23:11:54.949079 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.259698 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.279420 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.337571 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.349629 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 23:11:55 crc 
kubenswrapper[4995]: I0126 23:11:55.453892 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.466690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.479628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.531508 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.588734 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.634215 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.673796 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.722532 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 23:11:55 crc kubenswrapper[4995]: I0126 23:11:55.873595 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.040315 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.050511 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.107843 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.163443 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.275842 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.448143 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.492141 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.521488 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.558136 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.635256 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.699136 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.702163 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: 
I0126 23:11:56.703153 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.784234 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.853356 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 23:11:56 crc kubenswrapper[4995]: I0126 23:11:56.908081 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.113294 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.127857 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.142620 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.204410 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.236837 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.310238 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.443950 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" 
Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.539232 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 23:11:57 crc kubenswrapper[4995]: I0126 23:11:57.710776 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.022890 4995 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.027115 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-tzh2d","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.027174 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.033391 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.049039 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.049017635 podStartE2EDuration="18.049017635s" podCreationTimestamp="2026-01-26 23:11:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:11:58.045154793 +0000 UTC m=+222.209862258" watchObservedRunningTime="2026-01-26 23:11:58.049017635 +0000 UTC m=+222.213725110" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.123016 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.160428 4995 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.189306 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.204228 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.241509 4995 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.463738 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.487138 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.525716 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" path="/var/lib/kubelet/pods/4d4d9e36-8d49-41a8-a04b-194a5f652f94/volumes" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.526628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.603504 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.636228 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.686118 4995 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 23:11:58 
crc kubenswrapper[4995]: I0126 23:11:58.732019 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.809346 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.944180 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 23:11:58 crc kubenswrapper[4995]: I0126 23:11:58.949212 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.021236 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.047141 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.138050 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.162836 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.214160 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.398825 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.505588 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.528437 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.607040 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.620300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.637503 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.759935 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.767047 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.809399 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.866542 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.868983 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.912321 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 23:11:59 crc kubenswrapper[4995]: I0126 23:11:59.931122 
4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.021053 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043468 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"] Jan 26 23:12:00 crc kubenswrapper[4995]: E0126 23:12:00.043727 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043745 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" Jan 26 23:12:00 crc kubenswrapper[4995]: E0126 23:12:00.043768 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043778 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043926 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd391cd1-35c1-4ee8-98a3-80c0d9cec0e9" containerName="installer" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.043946 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d4d9e36-8d49-41a8-a04b-194a5f652f94" containerName="oauth-openshift" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.044422 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.046566 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048390 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048451 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.048518 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052066 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052516 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.052533 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.054073 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.055211 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.055569 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 23:12:00 crc 
kubenswrapper[4995]: I0126 23:12:00.055901 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.056204 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.063151 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"] Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.065855 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.069411 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.074760 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.142238 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.176167 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233048 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 
crc kubenswrapper[4995]: I0126 23:12:00.233167 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233361 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233420 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233467 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233505 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233542 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233729 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod 
\"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233880 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233919 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.233978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.290452 
4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.319415 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.334920 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335027 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335087 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335181 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: 
\"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335300 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335349 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335430 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335481 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335535 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335747 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " 
pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.335798 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.337316 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-dir\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.337933 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.342714 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.343431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-session\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.343776 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-audit-policies\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.344296 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.345368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.346406 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc 
kubenswrapper[4995]: I0126 23:12:00.347298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.347342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.347959 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.356521 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-login\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.358704 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/8a2cd900-279f-47b1-81d3-19e4c207de72-v4-0-config-user-template-error\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.358854 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xwrw\" (UniqueName: \"kubernetes.io/projected/8a2cd900-279f-47b1-81d3-19e4c207de72-kube-api-access-6xwrw\") pod \"oauth-openshift-7b964c775c-hq72c\" (UID: \"8a2cd900-279f-47b1-81d3-19e4c207de72\") " pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.364763 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.421344 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.469387 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.562661 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b964c775c-hq72c"] Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.605265 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.636886 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.665681 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.686517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" event={"ID":"8a2cd900-279f-47b1-81d3-19e4c207de72","Type":"ContainerStarted","Data":"831552b2f815a751cb8434e8e3e9309051b7d1fa08658e6e65c83a47fd59f0d2"} Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.710705 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.732646 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.736281 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.904340 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 23:12:00 crc kubenswrapper[4995]: I0126 23:12:00.960701 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.010568 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.010685 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.078610 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.091917 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.199527 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.211133 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.227919 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.264213 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.277924 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.294723 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.305734 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.364752 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.372054 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.428541 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.458590 4995 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-certs-default" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.486810 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.525244 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.639551 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.652418 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.693973 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" event={"ID":"8a2cd900-279f-47b1-81d3-19e4c207de72","Type":"ContainerStarted","Data":"24ded29b0787650485f0c2dd3b548ee5ac51fdb8310c27bc4a8b08f43bd44930"} Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.694325 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.701762 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.729260 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b964c775c-hq72c" podStartSLOduration=55.72923481 podStartE2EDuration="55.72923481s" podCreationTimestamp="2026-01-26 23:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:01.725360467 +0000 
UTC m=+225.890067942" watchObservedRunningTime="2026-01-26 23:12:01.72923481 +0000 UTC m=+225.893942285" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.819955 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.911705 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.966395 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 23:12:01 crc kubenswrapper[4995]: I0126 23:12:01.996947 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.003094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.046309 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.125272 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.179913 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.313006 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.566511 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 
26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.664366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.692055 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.728376 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.776617 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.779071 4995 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.779402 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" gracePeriod=5 Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.814405 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.850824 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.901326 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.901628 4995 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.925174 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.935227 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 23:12:02 crc kubenswrapper[4995]: I0126 23:12:02.994725 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.080990 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.087593 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.211975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.316366 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.338142 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.421738 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.457699 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.500322 4995 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.542741 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.556333 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.560862 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.751855 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.850145 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 23:12:03 crc kubenswrapper[4995]: I0126 23:12:03.998588 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.082054 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.087448 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.097183 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.097404 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 
23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.116486 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.163271 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.203623 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.218882 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.250243 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.257858 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.337738 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.367638 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.506220 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.561013 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.575328 4995 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.585246 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.602918 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.697018 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.835546 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 23:12:04 crc kubenswrapper[4995]: I0126 23:12:04.895847 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.022860 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.148735 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.196269 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.257008 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.283062 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 23:12:05 crc 
kubenswrapper[4995]: I0126 23:12:05.420797 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.455876 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.459949 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.496837 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.535908 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.560799 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.707341 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.808774 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 23:12:05 crc kubenswrapper[4995]: I0126 23:12:05.834152 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.084129 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.126386 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"trusted-ca" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.313654 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.351343 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.685262 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 23:12:06 crc kubenswrapper[4995]: I0126 23:12:06.995487 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 23:12:07 crc kubenswrapper[4995]: I0126 23:12:07.155910 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.385391 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.385784 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449595 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449648 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449799 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449847 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449883 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.449953 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450009 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450090 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450330 4995 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450355 4995 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450367 4995 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.450377 4995 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.456917 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.523851 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.551741 4995 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.735927 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736025 4995 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" exitCode=137 Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736161 4995 scope.go:117] "RemoveContainer" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.736186 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.758825 4995 scope.go:117] "RemoveContainer" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" Jan 26 23:12:08 crc kubenswrapper[4995]: E0126 23:12:08.759546 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": container with ID starting with cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae not found: ID does not exist" containerID="cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae" Jan 26 23:12:08 crc kubenswrapper[4995]: I0126 23:12:08.759604 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae"} err="failed to get container status \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": rpc error: code = NotFound desc = could not find container \"cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae\": container with ID starting with cf36dbd5295907b5599941b12d59178878687cf5bf1fc83c8be6022d85591aae not found: ID does not exist" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.012975 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.013763 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" containerID="cri-o://f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" gracePeriod=30 Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.116572 4995 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.117120 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" containerID="cri-o://6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" gracePeriod=30 Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.441670 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607694 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607803 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.607874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") pod \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\" (UID: \"7f5c78ad-3088-4100-90ac-f863bb21e4a2\") " Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.608677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config" (OuterVolumeSpecName: "config") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609208 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609539 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.609561 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f5c78ad-3088-4100-90ac-f863bb21e4a2-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.613891 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.614269 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv" (OuterVolumeSpecName: "kube-api-access-dj7jv") pod "7f5c78ad-3088-4100-90ac-f863bb21e4a2" (UID: "7f5c78ad-3088-4100-90ac-f863bb21e4a2"). InnerVolumeSpecName "kube-api-access-dj7jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.710990 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj7jv\" (UniqueName: \"kubernetes.io/projected/7f5c78ad-3088-4100-90ac-f863bb21e4a2-kube-api-access-dj7jv\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.711022 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f5c78ad-3088-4100-90ac-f863bb21e4a2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.820112 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854113 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" exitCode=0 Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854452 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854629 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerDied","Data":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"} Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d" event={"ID":"7f5c78ad-3088-4100-90ac-f863bb21e4a2","Type":"ContainerDied","Data":"d37e0cbeaf79e04860a72c99f4fde9e7eba767757f8c7acc0cfe617f3b06e685"} Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.854846 4995 scope.go:117] "RemoveContainer" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.869993 4995 generic.go:334] "Generic (PLEG): container finished" podID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" exitCode=0 Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870280 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerDied","Data":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"} Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" event={"ID":"1fb6bf0f-13dc-4a58-853b-98c00142f0bb","Type":"ContainerDied","Data":"8420e19a90b73cb1baaf3ed3fb083fef494d2cf0339203afd00eae69282ad6ad"} Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.870558 4995 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-zp6fr" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.882759 4995 scope.go:117] "RemoveContainer" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" Jan 26 23:12:28 crc kubenswrapper[4995]: E0126 23:12:28.883483 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": container with ID starting with 6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4 not found: ID does not exist" containerID="6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.883722 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4"} err="failed to get container status \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": rpc error: code = NotFound desc = could not find container \"6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4\": container with ID starting with 6dcb3041c6793f1a0c7ddd39359ca540c9354196ad888e81b8dd7b064edf0bc4 not found: ID does not exist" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.883844 4995 scope.go:117] "RemoveContainer" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.897074 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.901793 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qgp7d"] Jan 26 23:12:28 crc kubenswrapper[4995]: 
I0126 23:12:28.902461 4995 scope.go:117] "RemoveContainer" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" Jan 26 23:12:28 crc kubenswrapper[4995]: E0126 23:12:28.902900 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": container with ID starting with f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d not found: ID does not exist" containerID="f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d" Jan 26 23:12:28 crc kubenswrapper[4995]: I0126 23:12:28.902992 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d"} err="failed to get container status \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": rpc error: code = NotFound desc = could not find container \"f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d\": container with ID starting with f0473ccebcb467509282c6c695a2c9aa2e1ea588647baa279f4c38cb2524a91d not found: ID does not exist" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015090 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015151 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015200 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.015294 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") pod \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\" (UID: \"1fb6bf0f-13dc-4a58-853b-98c00142f0bb\") " Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016343 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016372 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca" (OuterVolumeSpecName: "client-ca") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.016484 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config" (OuterVolumeSpecName: "config") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.019176 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6" (OuterVolumeSpecName: "kube-api-access-pfdf6") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "kube-api-access-pfdf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.019743 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1fb6bf0f-13dc-4a58-853b-98c00142f0bb" (UID: "1fb6bf0f-13dc-4a58-853b-98c00142f0bb"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117069 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117201 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfdf6\" (UniqueName: \"kubernetes.io/projected/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-kube-api-access-pfdf6\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117217 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117228 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.117241 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fb6bf0f-13dc-4a58-853b-98c00142f0bb-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.193846 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.200911 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-zp6fr"] Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.341984 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"] Jan 26 23:12:29 crc kubenswrapper[4995]: 
E0126 23:12:29.342292 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342311 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: E0126 23:12:29.342328 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342335 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: E0126 23:12:29.342347 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342352 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342445 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" containerName="route-controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342459 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" containerName="controller-manager" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342471 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.342834 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.343768 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.344333 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.345247 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.345489 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349802 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349856 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349930 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.349802 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.350271 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.350618 4995 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"client-ca" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.352377 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.352904 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.353198 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.353311 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.356050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"] Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.358621 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.358904 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528179 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528223 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528244 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528299 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528320 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " 
pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528345 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528365 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.528391 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.629818 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.629995 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630243 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630315 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630363 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630406 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.630451 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632374 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-config\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632806 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac827692-56fd-45a9-8cc5-48a6a6c87eac-client-ca\") pod 
\"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.632997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.633193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.633277 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.639342 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac827692-56fd-45a9-8cc5-48a6a6c87eac-serving-cert\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.647084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.652676 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"controller-manager-79ff58c766-m964x\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.655687 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlsm5\" (UniqueName: \"kubernetes.io/projected/ac827692-56fd-45a9-8cc5-48a6a6c87eac-kube-api-access-xlsm5\") pod \"route-controller-manager-6cf4499c5f-z7t28\" (UID: \"ac827692-56fd-45a9-8cc5-48a6a6c87eac\") " pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.727611 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:29 crc kubenswrapper[4995]: I0126 23:12:29.739346 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.001681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.158005 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28"] Jan 26 23:12:30 crc kubenswrapper[4995]: W0126 23:12:30.165535 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac827692_56fd_45a9_8cc5_48a6a6c87eac.slice/crio-64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1 WatchSource:0}: Error finding container 64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1: Status 404 returned error can't find the container with id 64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1 Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.524868 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fb6bf0f-13dc-4a58-853b-98c00142f0bb" path="/var/lib/kubelet/pods/1fb6bf0f-13dc-4a58-853b-98c00142f0bb/volumes" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.526051 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5c78ad-3088-4100-90ac-f863bb21e4a2" path="/var/lib/kubelet/pods/7f5c78ad-3088-4100-90ac-f863bb21e4a2/volumes" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.884187 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" event={"ID":"ac827692-56fd-45a9-8cc5-48a6a6c87eac","Type":"ContainerStarted","Data":"0a7508f64534f42cf30fc8d6c7530ab1d9697d46743b2c670f2de3f2d0e1577d"} Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.884228 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" event={"ID":"ac827692-56fd-45a9-8cc5-48a6a6c87eac","Type":"ContainerStarted","Data":"64d2d597a07afe330e661770e4c6b0726cb20ed72a984a907c8b6951d04df9c1"} Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.885345 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.886741 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerStarted","Data":"866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213"} Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.886772 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerStarted","Data":"2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936"} Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.887146 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.893502 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.901736 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.911501 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cf4499c5f-z7t28" 
podStartSLOduration=2.911482752 podStartE2EDuration="2.911482752s" podCreationTimestamp="2026-01-26 23:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:30.909859872 +0000 UTC m=+255.074567337" watchObservedRunningTime="2026-01-26 23:12:30.911482752 +0000 UTC m=+255.076190227" Jan 26 23:12:30 crc kubenswrapper[4995]: I0126 23:12:30.932142 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" podStartSLOduration=2.932128324 podStartE2EDuration="2.932128324s" podCreationTimestamp="2026-01-26 23:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:30.929917819 +0000 UTC m=+255.094625284" watchObservedRunningTime="2026-01-26 23:12:30.932128324 +0000 UTC m=+255.096835789" Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.448502 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.449248 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" containerID="cri-o://866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" gracePeriod=30 Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.986983 4995 generic.go:334] "Generic (PLEG): container finished" podID="526f514d-b20f-45b7-b477-198fbb124d43" containerID="866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" exitCode=0 Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987073 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerDied","Data":"866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213"} Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" event={"ID":"526f514d-b20f-45b7-b477-198fbb124d43","Type":"ContainerDied","Data":"2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936"} Jan 26 23:12:48 crc kubenswrapper[4995]: I0126 23:12:48.987588 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2928f01136cbe6a2be2b1a77289b1ab6916a7d7784e58242ff500aa4ed967936" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.015741 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.183964 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.184030 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca" (OuterVolumeSpecName: "client-ca") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: 
"526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185051 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config" (OuterVolumeSpecName: "config") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185169 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185201 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185223 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") pod \"526f514d-b20f-45b7-b477-198fbb124d43\" (UID: \"526f514d-b20f-45b7-b477-198fbb124d43\") " Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.185694 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186230 4995 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186252 4995 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.186260 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/526f514d-b20f-45b7-b477-198fbb124d43-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.190762 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.191297 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf" (OuterVolumeSpecName: "kube-api-access-tm9sf") pod "526f514d-b20f-45b7-b477-198fbb124d43" (UID: "526f514d-b20f-45b7-b477-198fbb124d43"). InnerVolumeSpecName "kube-api-access-tm9sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.287396 4995 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/526f514d-b20f-45b7-b477-198fbb124d43-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.287462 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm9sf\" (UniqueName: \"kubernetes.io/projected/526f514d-b20f-45b7-b477-198fbb124d43-kube-api-access-tm9sf\") on node \"crc\" DevicePath \"\"" Jan 26 23:12:49 crc kubenswrapper[4995]: I0126 23:12:49.991745 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79ff58c766-m964x" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.018233 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.021663 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79ff58c766-m964x"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.358749 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: E0126 23:12:50.359438 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.359483 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.359693 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="526f514d-b20f-45b7-b477-198fbb124d43" containerName="controller-manager" Jan 26 
23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.360405 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.362471 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.363682 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.363965 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364151 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364743 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.364802 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.373645 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.373832 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501846 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod 
\"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501911 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501932 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501954 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.501971 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.524034 4995 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="526f514d-b20f-45b7-b477-198fbb124d43" path="/var/lib/kubelet/pods/526f514d-b20f-45b7-b477-198fbb124d43/volumes" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603605 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603633 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603665 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.603686 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.605204 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-proxy-ca-bundles\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.605613 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-client-ca\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.606653 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b80f458e-de76-46ef-9e85-73791a38b0f7-config\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.608620 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b80f458e-de76-46ef-9e85-73791a38b0f7-serving-cert\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.630551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-zqdqv\" (UniqueName: \"kubernetes.io/projected/b80f458e-de76-46ef-9e85-73791a38b0f7-kube-api-access-zqdqv\") pod \"controller-manager-5ff868d854-x4qdc\" (UID: \"b80f458e-de76-46ef-9e85-73791a38b0f7\") " pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.679815 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.859861 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff868d854-x4qdc"] Jan 26 23:12:50 crc kubenswrapper[4995]: I0126 23:12:50.997065 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" event={"ID":"b80f458e-de76-46ef-9e85-73791a38b0f7","Type":"ContainerStarted","Data":"dce09e3ab65a11ac1479cd379ccc1aff37b0e6548c52af866d7b6066267832b6"} Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.005404 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" event={"ID":"b80f458e-de76-46ef-9e85-73791a38b0f7","Type":"ContainerStarted","Data":"849492b8200b755cb38c7ef89b4b13fd4cbe5025d5e9aca12e1f15784fa69cc0"} Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.006123 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.010933 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" Jan 26 23:12:52 crc kubenswrapper[4995]: I0126 23:12:52.030962 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-5ff868d854-x4qdc" podStartSLOduration=4.030940218 podStartE2EDuration="4.030940218s" podCreationTimestamp="2026-01-26 23:12:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:12:52.022134844 +0000 UTC m=+276.186842319" watchObservedRunningTime="2026-01-26 23:12:52.030940218 +0000 UTC m=+276.195647673" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.806431 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.808019 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.822320 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931073 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931150 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931172 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931207 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931264 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931377 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: 
\"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.931419 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:13 crc kubenswrapper[4995]: I0126 23:13:13.952716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032368 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032414 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032451 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032482 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032501 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032525 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.032542 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.033495 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/40aa93b4-3513-4def-ab82-d438b38e5e92-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.033891 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-certificates\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.035398 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/40aa93b4-3513-4def-ab82-d438b38e5e92-trusted-ca\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.037639 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/40aa93b4-3513-4def-ab82-d438b38e5e92-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.037928 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-registry-tls\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 
23:13:14.049251 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-bound-sa-token\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.049910 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr8lc\" (UniqueName: \"kubernetes.io/projected/40aa93b4-3513-4def-ab82-d438b38e5e92-kube-api-access-pr8lc\") pod \"image-registry-66df7c8f76-8lxhv\" (UID: \"40aa93b4-3513-4def-ab82-d438b38e5e92\") " pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.134401 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:14 crc kubenswrapper[4995]: I0126 23:13:14.536611 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8lxhv"] Jan 26 23:13:14 crc kubenswrapper[4995]: W0126 23:13:14.541882 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40aa93b4_3513_4def_ab82_d438b38e5e92.slice/crio-8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672 WatchSource:0}: Error finding container 8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672: Status 404 returned error can't find the container with id 8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672 Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.131945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" 
event={"ID":"40aa93b4-3513-4def-ab82-d438b38e5e92","Type":"ContainerStarted","Data":"15e2eb8a7af4db246b80b7bd9e7a0494f5f523a08daa3e093fdc8f1a6582933e"} Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.132324 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" event={"ID":"40aa93b4-3513-4def-ab82-d438b38e5e92","Type":"ContainerStarted","Data":"8841c8b62108eabdf794f2003afd586bfc17c428f8f9c2427123e96745b5d672"} Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.133287 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:15 crc kubenswrapper[4995]: I0126 23:13:15.163322 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" podStartSLOduration=2.163300016 podStartE2EDuration="2.163300016s" podCreationTimestamp="2026-01-26 23:13:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:13:15.159086959 +0000 UTC m=+299.323794434" watchObservedRunningTime="2026-01-26 23:13:15.163300016 +0000 UTC m=+299.328007481" Jan 26 23:13:16 crc kubenswrapper[4995]: I0126 23:13:16.317083 4995 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.297910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.298965 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8z855" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" containerID="cri-o://2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" gracePeriod=30 Jan 
26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.303974 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.304352 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6wf22" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" containerID="cri-o://8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.321363 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.322137 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" containerID="cri-o://ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.337628 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.338176 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-px4t9" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" containerID="cri-o://049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.347356 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.347639 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-wq2hm" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" containerID="cri-o://4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" gracePeriod=30 Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.364188 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.364952 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.375375 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448295 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.448886 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550505 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550565 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.550631 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.552288 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc 
kubenswrapper[4995]: I0126 23:13:22.556669 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d781053b-fcf3-44a7-812a-8af6c2c1ab07-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.569278 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfp7\" (UniqueName: \"kubernetes.io/projected/d781053b-fcf3-44a7-812a-8af6c2c1ab07-kube-api-access-zgfp7\") pod \"marketplace-operator-79b997595-vsjb7\" (UID: \"d781053b-fcf3-44a7-812a-8af6c2c1ab07\") " pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.748018 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.778060 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.948703 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955350 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955397 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.955425 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") pod \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\" (UID: \"b7295e1f-e3cb-4710-8763-b02b3e9ed67b\") " Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.956291 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities" (OuterVolumeSpecName: "utilities") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.960211 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.984634 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m" (OuterVolumeSpecName: "kube-api-access-jbb4m") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "kube-api-access-jbb4m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:22 crc kubenswrapper[4995]: I0126 23:13:22.998151 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.005310 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.029619 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b7295e1f-e3cb-4710-8763-b02b3e9ed67b" (UID: "b7295e1f-e3cb-4710-8763-b02b3e9ed67b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056824 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056863 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056908 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056957 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") pod \"58513b5e-460e-4344-91e3-1d20e26fd533\" (UID: \"58513b5e-460e-4344-91e3-1d20e26fd533\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.056981 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") pod \"38be674d-6ae2-441d-b361-a9eea3b694a7\" (UID: \"38be674d-6ae2-441d-b361-a9eea3b694a7\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057222 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057240 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbb4m\" (UniqueName: \"kubernetes.io/projected/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-kube-api-access-jbb4m\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057255 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7295e1f-e3cb-4710-8763-b02b3e9ed67b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057748 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities" (OuterVolumeSpecName: "utilities") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.057970 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities" (OuterVolumeSpecName: "utilities") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.060436 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj" (OuterVolumeSpecName: "kube-api-access-xbvbj") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "kube-api-access-xbvbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.061316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz" (OuterVolumeSpecName: "kube-api-access-c27fz") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "kube-api-access-c27fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.081006 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38be674d-6ae2-441d-b361-a9eea3b694a7" (UID: "38be674d-6ae2-441d-b361-a9eea3b694a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.117791 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58513b5e-460e-4344-91e3-1d20e26fd533" (UID: "58513b5e-460e-4344-91e3-1d20e26fd533"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.157953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158005 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158035 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158056 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") pod \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\" (UID: \"3f9a7b30-dccb-4753-81a1-622853d6ba3c\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158110 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") pod \"5166d9b5-534e-4426-8085-a1900c7bdafb\" (UID: \"5166d9b5-534e-4426-8085-a1900c7bdafb\") " Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158319 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158330 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158339 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c27fz\" (UniqueName: \"kubernetes.io/projected/38be674d-6ae2-441d-b361-a9eea3b694a7-kube-api-access-c27fz\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158349 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbvbj\" (UniqueName: \"kubernetes.io/projected/58513b5e-460e-4344-91e3-1d20e26fd533-kube-api-access-xbvbj\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158357 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58513b5e-460e-4344-91e3-1d20e26fd533-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158365 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38be674d-6ae2-441d-b361-a9eea3b694a7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158773 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.158831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities" (OuterVolumeSpecName: "utilities") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" (UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626" (OuterVolumeSpecName: "kube-api-access-df626") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" (UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "kube-api-access-df626". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161670 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf" (OuterVolumeSpecName: "kube-api-access-x4fnf") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "kube-api-access-x4fnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.161882 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "3f9a7b30-dccb-4753-81a1-622853d6ba3c" (UID: "3f9a7b30-dccb-4753-81a1-622853d6ba3c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.174943 4995 generic.go:334] "Generic (PLEG): container finished" podID="58513b5e-460e-4344-91e3-1d20e26fd533" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175010 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wf22" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175015 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175144 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wf22" event={"ID":"58513b5e-460e-4344-91e3-1d20e26fd533","Type":"ContainerDied","Data":"f1140a94397286fd3722f80f6c4a1ec3c8895bbf65314d7a81fe9bc35b32d3b7"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.175173 4995 scope.go:117] "RemoveContainer" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180679 4995 generic.go:334] "Generic (PLEG): container finished" podID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" 
containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180765 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8z855" event={"ID":"b7295e1f-e3cb-4710-8763-b02b3e9ed67b","Type":"ContainerDied","Data":"a9d19028654a4b4f323d0e8da8ba08742825da3af7b48d707205e793ef542ae5"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.180850 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8z855" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186333 4995 generic.go:334] "Generic (PLEG): container finished" podID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186391 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerDied","Data":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186417 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" event={"ID":"3f9a7b30-dccb-4753-81a1-622853d6ba3c","Type":"ContainerDied","Data":"f901f601e0243ea0adb58f7b81260269e5e87406c390fbde6045e9147797112d"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.186494 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-phjts" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188146 4995 generic.go:334] "Generic (PLEG): container finished" podID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-px4t9" event={"ID":"38be674d-6ae2-441d-b361-a9eea3b694a7","Type":"ContainerDied","Data":"2791ea2f560df413a781ffdcf254d63067a2528c47ab19f2d416f080d3de6868"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.188229 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-px4t9" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192741 4995 generic.go:334] "Generic (PLEG): container finished" podID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" exitCode=0 Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192796 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wq2hm" event={"ID":"5166d9b5-534e-4426-8085-a1900c7bdafb","Type":"ContainerDied","Data":"e6c2cdd4d29af6d09c813a8f167fa421c7aeada38df75885bcbaf2e7ea7b36fd"} Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.192818 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wq2hm" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.206538 4995 scope.go:117] "RemoveContainer" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.209439 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.212206 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6wf22"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.238200 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.253436 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8z855"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.258428 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259527 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259573 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259584 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4fnf\" (UniqueName: \"kubernetes.io/projected/3f9a7b30-dccb-4753-81a1-622853d6ba3c-kube-api-access-x4fnf\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc 
kubenswrapper[4995]: I0126 23:13:23.259592 4995 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f9a7b30-dccb-4753-81a1-622853d6ba3c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.259602 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df626\" (UniqueName: \"kubernetes.io/projected/5166d9b5-534e-4426-8085-a1900c7bdafb-kube-api-access-df626\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.265197 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-phjts"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.268322 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.273253 4995 scope.go:117] "RemoveContainer" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.275165 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-px4t9"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.280146 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vsjb7"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287074 4995 scope.go:117] "RemoveContainer" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.287608 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": container with ID starting with 8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a not found: ID does not 
exist" containerID="8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287644 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a"} err="failed to get container status \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": rpc error: code = NotFound desc = could not find container \"8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a\": container with ID starting with 8a64df2e50955301eeac6cf356a2c10da5ac2712af8d7e4737ce6ec8e7dea67a not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.287671 4995 scope.go:117] "RemoveContainer" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.288155 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": container with ID starting with 51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b not found: ID does not exist" containerID="51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288215 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b"} err="failed to get container status \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": rpc error: code = NotFound desc = could not find container \"51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b\": container with ID starting with 51f2888776be4af9626cc31023cb1aaddf91df04db77fd5616e8ec20fe14751b not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288243 4995 scope.go:117] 
"RemoveContainer" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.288713 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": container with ID starting with 837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b not found: ID does not exist" containerID="837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288744 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b"} err="failed to get container status \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": rpc error: code = NotFound desc = could not find container \"837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b\": container with ID starting with 837ae8eeeaa0d08585b80d222f732c3005c58ebd68a500d87cb8810f8da1a15b not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.288767 4995 scope.go:117] "RemoveContainer" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.303116 4995 scope.go:117] "RemoveContainer" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.315721 4995 scope.go:117] "RemoveContainer" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.319797 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5166d9b5-534e-4426-8085-a1900c7bdafb" 
(UID: "5166d9b5-534e-4426-8085-a1900c7bdafb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.351714 4995 scope.go:117] "RemoveContainer" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352037 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": container with ID starting with 2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40 not found: ID does not exist" containerID="2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352070 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40"} err="failed to get container status \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": rpc error: code = NotFound desc = could not find container \"2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40\": container with ID starting with 2ab5842effb0985a972d61dca0809adab8838afd2cf8854782018433bbd5ee40 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352092 4995 scope.go:117] "RemoveContainer" containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352442 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": container with ID starting with 4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda not found: ID does not exist" 
containerID="4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352465 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda"} err="failed to get container status \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": rpc error: code = NotFound desc = could not find container \"4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda\": container with ID starting with 4a9f8092621661a13a596fb098af401b06762d2bfa3186942b94e527d2dfeeda not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352479 4995 scope.go:117] "RemoveContainer" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.352677 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": container with ID starting with 2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292 not found: ID does not exist" containerID="2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352697 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292"} err="failed to get container status \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": rpc error: code = NotFound desc = could not find container \"2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292\": container with ID starting with 2622118ef9b2734d2dd7caae49aecc003d5a844c314faa499a76e6bd86ae9292 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.352713 4995 scope.go:117] 
"RemoveContainer" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.360260 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5166d9b5-534e-4426-8085-a1900c7bdafb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.367932 4995 scope.go:117] "RemoveContainer" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.368368 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": container with ID starting with ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad not found: ID does not exist" containerID="ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.368394 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad"} err="failed to get container status \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": rpc error: code = NotFound desc = could not find container \"ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad\": container with ID starting with ceaadd0695b29813c0cf9b86d96477fbf66a4b0476b38addf9c0570229d52cad not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.368416 4995 scope.go:117] "RemoveContainer" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.380771 4995 scope.go:117] "RemoveContainer" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc 
kubenswrapper[4995]: I0126 23:13:23.399311 4995 scope.go:117] "RemoveContainer" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.411870 4995 scope.go:117] "RemoveContainer" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.412573 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": container with ID starting with 049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429 not found: ID does not exist" containerID="049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.412598 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429"} err="failed to get container status \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": rpc error: code = NotFound desc = could not find container \"049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429\": container with ID starting with 049244c83f7e9d8bdc50cb25bed394d6ea1a079e1f8d11c3880ff9df0f380429 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.412620 4995 scope.go:117] "RemoveContainer" containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.413080 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": container with ID starting with 0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6 not found: ID does not exist" 
containerID="0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413134 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6"} err="failed to get container status \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": rpc error: code = NotFound desc = could not find container \"0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6\": container with ID starting with 0c89787352ddbd10f6f6c1561503f8a7efb238d20a0be9dcb8202ec50c5208c6 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413150 4995 scope.go:117] "RemoveContainer" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.413945 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": container with ID starting with 9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c not found: ID does not exist" containerID="9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413965 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c"} err="failed to get container status \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": rpc error: code = NotFound desc = could not find container \"9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c\": container with ID starting with 9fac65ee26d2e810c38add5bde063e06382ae7bd0dc96ee51f9d5bb06195a31c not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.413979 4995 scope.go:117] 
"RemoveContainer" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.476512 4995 scope.go:117] "RemoveContainer" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.492922 4995 scope.go:117] "RemoveContainer" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506500 4995 scope.go:117] "RemoveContainer" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.506887 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": container with ID starting with 4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba not found: ID does not exist" containerID="4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506927 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba"} err="failed to get container status \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": rpc error: code = NotFound desc = could not find container \"4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba\": container with ID starting with 4833bf47b6fcc31523f34cfbf93376c1bc3bf409c264c243f52d16c94b989eba not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.506962 4995 scope.go:117] "RemoveContainer" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.507664 4995 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": container with ID starting with e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706 not found: ID does not exist" containerID="e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.507701 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706"} err="failed to get container status \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": rpc error: code = NotFound desc = could not find container \"e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706\": container with ID starting with e970ea9d45d518da162e2142e6065c587ae4af1b7f3370bc299aada006f16706 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.507726 4995 scope.go:117] "RemoveContainer" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.508242 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": container with ID starting with 132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92 not found: ID does not exist" containerID="132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.508275 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92"} err="failed to get container status \"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": rpc error: code = NotFound desc = could not find container 
\"132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92\": container with ID starting with 132dbfb78e34d4116ec32e116c34723be21ffa73cac3b95e274ac2bc2325df92 not found: ID does not exist" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.537649 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.543607 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wq2hm"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702169 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"] Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702455 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702486 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702506 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702517 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702536 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702549 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702564 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702575 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702590 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702601 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702641 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702654 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702670 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702680 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702696 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702706 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702721 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702733 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702748 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702758 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-content" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702770 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702779 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702795 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702804 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="extract-utilities" Jan 26 23:13:23 crc kubenswrapper[4995]: E0126 23:13:23.702819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702830 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702973 4995 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.702991 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703007 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" containerName="marketplace-operator" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703019 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.703035 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" containerName="registry-server" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.704190 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.706361 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.718988 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"] Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.866933 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.867377 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.867453 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.968937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: 
\"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.968997 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969739 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-catalog-content\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.969808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-utilities\") pod \"certified-operators-wfnlj\" (UID: \"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:23 crc kubenswrapper[4995]: I0126 23:13:23.992061 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xbv\" (UniqueName: \"kubernetes.io/projected/f956bbfb-557b-4b78-b2eb-141bdd1ca81f-kube-api-access-n9xbv\") pod \"certified-operators-wfnlj\" (UID: 
\"f956bbfb-557b-4b78-b2eb-141bdd1ca81f\") " pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.030785 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201553 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" event={"ID":"d781053b-fcf3-44a7-812a-8af6c2c1ab07","Type":"ContainerStarted","Data":"00e0d2c13cb1c5db6d1970ab2569adf6dcc5fce78b5bad46984c10e13eeaf28d"} Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201923 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.201936 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" event={"ID":"d781053b-fcf3-44a7-812a-8af6c2c1ab07","Type":"ContainerStarted","Data":"758bebf8d8a6cf3e6042b3f391e73a48f38cb9538f65d0792c2280e04765f12b"} Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.204698 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.218847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vsjb7" podStartSLOduration=2.218819821 podStartE2EDuration="2.218819821s" podCreationTimestamp="2026-01-26 23:13:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:13:24.214968885 +0000 UTC m=+308.379676350" watchObservedRunningTime="2026-01-26 23:13:24.218819821 +0000 UTC m=+308.383527296" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 
23:13:24.461587 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wfnlj"] Jan 26 23:13:24 crc kubenswrapper[4995]: W0126 23:13:24.473497 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf956bbfb_557b_4b78_b2eb_141bdd1ca81f.slice/crio-700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4 WatchSource:0}: Error finding container 700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4: Status 404 returned error can't find the container with id 700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4 Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.523289 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38be674d-6ae2-441d-b361-a9eea3b694a7" path="/var/lib/kubelet/pods/38be674d-6ae2-441d-b361-a9eea3b694a7/volumes" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.523917 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9a7b30-dccb-4753-81a1-622853d6ba3c" path="/var/lib/kubelet/pods/3f9a7b30-dccb-4753-81a1-622853d6ba3c/volumes" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.524413 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5166d9b5-534e-4426-8085-a1900c7bdafb" path="/var/lib/kubelet/pods/5166d9b5-534e-4426-8085-a1900c7bdafb/volumes" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.525498 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58513b5e-460e-4344-91e3-1d20e26fd533" path="/var/lib/kubelet/pods/58513b5e-460e-4344-91e3-1d20e26fd533/volumes" Jan 26 23:13:24 crc kubenswrapper[4995]: I0126 23:13:24.526072 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7295e1f-e3cb-4710-8763-b02b3e9ed67b" path="/var/lib/kubelet/pods/b7295e1f-e3cb-4710-8763-b02b3e9ed67b/volumes" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215281 4995 generic.go:334] 
"Generic (PLEG): container finished" podID="f956bbfb-557b-4b78-b2eb-141bdd1ca81f" containerID="0bc0c1c748e963f659145c02e76ffb66acc022e851af0ac12bd0e010bad5980c" exitCode=0 Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215395 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerDied","Data":"0bc0c1c748e963f659145c02e76ffb66acc022e851af0ac12bd0e010bad5980c"} Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.215454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"700688f93c442f79a243178f374e615f654283fc7fe644d1556370284e5d9da4"} Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.503814 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"] Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.513053 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.516903 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.524134 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"] Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588836 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.588949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: 
\"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.690647 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.691319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-utilities\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.691320 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-catalog-content\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.727740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vhr6\" (UniqueName: \"kubernetes.io/projected/7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8-kube-api-access-5vhr6\") pod \"redhat-marketplace-56ct7\" (UID: \"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8\") " 
pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:25 crc kubenswrapper[4995]: I0126 23:13:25.841876 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.110653 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"] Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.111768 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.113482 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.122052 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"] Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208353 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208686 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.208709 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.222573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9"} Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.239601 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-56ct7"] Jan 26 23:13:26 crc kubenswrapper[4995]: W0126 23:13:26.242282 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7af9b1ce_9df1_4d94_ae24_e8ff6cd5edb8.slice/crio-e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e WatchSource:0}: Error finding container e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e: Status 404 returned error can't find the container with id e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310173 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " 
pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.310260 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.311193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-utilities\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.311641 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-catalog-content\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.334215 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmkmj\" (UniqueName: \"kubernetes.io/projected/269f6fbd-326f-45d1-a1a6-ea5da5b7daff-kube-api-access-hmkmj\") pod \"redhat-operators-4fw5x\" (UID: \"269f6fbd-326f-45d1-a1a6-ea5da5b7daff\") " pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.430779 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:26 crc kubenswrapper[4995]: I0126 23:13:26.810360 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4fw5x"] Jan 26 23:13:26 crc kubenswrapper[4995]: W0126 23:13:26.817126 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod269f6fbd_326f_45d1_a1a6_ea5da5b7daff.slice/crio-728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78 WatchSource:0}: Error finding container 728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78: Status 404 returned error can't find the container with id 728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78 Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.229274 4995 generic.go:334] "Generic (PLEG): container finished" podID="f956bbfb-557b-4b78-b2eb-141bdd1ca81f" containerID="869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9" exitCode=0 Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.229329 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerDied","Data":"869cc4a79b2582359f95828b43e2010f744e718dd65565aa853c6babb96088d9"} Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.230775 4995 generic.go:334] "Generic (PLEG): container finished" podID="7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8" containerID="6d1bc306abbb56bdfd3f785e9c32058825128c9ed36c3a755d3e9aa98945632a" exitCode=0 Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.230854 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerDied","Data":"6d1bc306abbb56bdfd3f785e9c32058825128c9ed36c3a755d3e9aa98945632a"} Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 
23:13:27.230884 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerStarted","Data":"e8e9e352a174904ba1b79ee6974c6e2452e4c36510e8ba8df1cfe3030411691e"} Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233384 4995 generic.go:334] "Generic (PLEG): container finished" podID="269f6fbd-326f-45d1-a1a6-ea5da5b7daff" containerID="1dd8deea502b9435640c1b2d36aa01a07924105f54896af6152cc73b04c0fc94" exitCode=0 Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerDied","Data":"1dd8deea502b9435640c1b2d36aa01a07924105f54896af6152cc73b04c0fc94"} Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.233429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"728153c2747477434fd401c0cb9df70ab4b7751efb9b4d56be09e0326d5eda78"} Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.905055 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6tk5"] Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.906775 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.910602 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 23:13:27 crc kubenswrapper[4995]: I0126 23:13:27.914283 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6tk5"] Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031690 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.031747 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" 
(UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.133561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.134039 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-utilities\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.134064 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d1ac969-80ec-4450-9f6d-0cca599d2185-catalog-content\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.153188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h9kc\" (UniqueName: \"kubernetes.io/projected/0d1ac969-80ec-4450-9f6d-0cca599d2185-kube-api-access-6h9kc\") pod \"community-operators-c6tk5\" (UID: \"0d1ac969-80ec-4450-9f6d-0cca599d2185\") " 
pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.239912 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wfnlj" event={"ID":"f956bbfb-557b-4b78-b2eb-141bdd1ca81f","Type":"ContainerStarted","Data":"1dba11e769a6270dac1d4d9c5a1002367207e3649a3decbe003296316a627578"} Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.242414 4995 generic.go:334] "Generic (PLEG): container finished" podID="7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8" containerID="d0a13caed867f469c2c5df040207299f53b719babd95d0b957e428ed8605f349" exitCode=0 Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.242503 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerDied","Data":"d0a13caed867f469c2c5df040207299f53b719babd95d0b957e428ed8605f349"} Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.249697 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd"} Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.264082 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wfnlj" podStartSLOduration=2.646016178 podStartE2EDuration="5.264066804s" podCreationTimestamp="2026-01-26 23:13:23 +0000 UTC" firstStartedPulling="2026-01-26 23:13:25.219364706 +0000 UTC m=+309.384072211" lastFinishedPulling="2026-01-26 23:13:27.837415382 +0000 UTC m=+312.002122837" observedRunningTime="2026-01-26 23:13:28.2581474 +0000 UTC m=+312.422854875" watchObservedRunningTime="2026-01-26 23:13:28.264066804 +0000 UTC m=+312.428774269" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.264777 4995 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:28 crc kubenswrapper[4995]: I0126 23:13:28.675740 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6tk5"] Jan 26 23:13:28 crc kubenswrapper[4995]: W0126 23:13:28.684411 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d1ac969_80ec_4450_9f6d_0cca599d2185.slice/crio-0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05 WatchSource:0}: Error finding container 0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05: Status 404 returned error can't find the container with id 0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05 Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.258251 4995 generic.go:334] "Generic (PLEG): container finished" podID="269f6fbd-326f-45d1-a1a6-ea5da5b7daff" containerID="4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd" exitCode=0 Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.258361 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerDied","Data":"4c4d4ab22b25ca459dc854a7eab9fe7da37ae97131a1ff91022c2c0a4d09cecd"} Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.260718 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d1ac969-80ec-4450-9f6d-0cca599d2185" containerID="15974069ca0e1129e5f854388495c052b9da6ee80619cbe10d1f0f69a0499ab8" exitCode=0 Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.260822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerDied","Data":"15974069ca0e1129e5f854388495c052b9da6ee80619cbe10d1f0f69a0499ab8"} Jan 26 23:13:29 crc 
kubenswrapper[4995]: I0126 23:13:29.260855 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"0980a508c33d810d48e6037f8ab68cd73013f60f2b3a4957cd2cd48dc5a3fa05"} Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.264124 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-56ct7" event={"ID":"7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8","Type":"ContainerStarted","Data":"aa61d4da104d762f1659cd2b569847ec6d832c90898dea7a7290f1d3ff663073"} Jan 26 23:13:29 crc kubenswrapper[4995]: I0126 23:13:29.300425 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-56ct7" podStartSLOduration=2.643693974 podStartE2EDuration="4.300405221s" podCreationTimestamp="2026-01-26 23:13:25 +0000 UTC" firstStartedPulling="2026-01-26 23:13:27.231873532 +0000 UTC m=+311.396580997" lastFinishedPulling="2026-01-26 23:13:28.888584789 +0000 UTC m=+313.053292244" observedRunningTime="2026-01-26 23:13:29.299923318 +0000 UTC m=+313.464630793" watchObservedRunningTime="2026-01-26 23:13:29.300405221 +0000 UTC m=+313.465112686" Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.271187 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4"} Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.275400 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4fw5x" event={"ID":"269f6fbd-326f-45d1-a1a6-ea5da5b7daff","Type":"ContainerStarted","Data":"e278f54b8414d0bcbf7e0030eb0f4b540676e7142ab41c203e4f3d401df653d3"} Jan 26 23:13:30 crc kubenswrapper[4995]: I0126 23:13:30.312986 4995 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4fw5x" podStartSLOduration=1.925396978 podStartE2EDuration="4.312965128s" podCreationTimestamp="2026-01-26 23:13:26 +0000 UTC" firstStartedPulling="2026-01-26 23:13:27.23506578 +0000 UTC m=+311.399773265" lastFinishedPulling="2026-01-26 23:13:29.62263395 +0000 UTC m=+313.787341415" observedRunningTime="2026-01-26 23:13:30.30799331 +0000 UTC m=+314.472700775" watchObservedRunningTime="2026-01-26 23:13:30.312965128 +0000 UTC m=+314.477672593" Jan 26 23:13:31 crc kubenswrapper[4995]: I0126 23:13:31.282550 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d1ac969-80ec-4450-9f6d-0cca599d2185" containerID="6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4" exitCode=0 Jan 26 23:13:31 crc kubenswrapper[4995]: I0126 23:13:31.283840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerDied","Data":"6c2f8034351a807d7124964536fc47b671dfe729e217b054284202b6310a60f4"} Jan 26 23:13:33 crc kubenswrapper[4995]: I0126 23:13:33.296883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6tk5" event={"ID":"0d1ac969-80ec-4450-9f6d-0cca599d2185","Type":"ContainerStarted","Data":"40458dc2242e78866fdf834bbc6ffea6d129bd8e9c66e43c21f285307c140255"} Jan 26 23:13:33 crc kubenswrapper[4995]: I0126 23:13:33.318446 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6tk5" podStartSLOduration=3.892994101 podStartE2EDuration="6.318429029s" podCreationTimestamp="2026-01-26 23:13:27 +0000 UTC" firstStartedPulling="2026-01-26 23:13:29.262166361 +0000 UTC m=+313.426873826" lastFinishedPulling="2026-01-26 23:13:31.687601259 +0000 UTC m=+315.852308754" observedRunningTime="2026-01-26 23:13:33.315004354 +0000 UTC 
m=+317.479711859" watchObservedRunningTime="2026-01-26 23:13:33.318429029 +0000 UTC m=+317.483136494" Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.031561 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.031844 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.079951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.139465 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8lxhv" Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.200147 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:13:34 crc kubenswrapper[4995]: I0126 23:13:34.347433 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wfnlj" Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.842960 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.843382 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:35 crc kubenswrapper[4995]: I0126 23:13:35.901663 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.357175 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-56ct7" Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.431838 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.431883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:36 crc kubenswrapper[4995]: I0126 23:13:36.479926 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:37 crc kubenswrapper[4995]: I0126 23:13:37.357650 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4fw5x" Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.265763 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.265804 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.306419 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:38 crc kubenswrapper[4995]: I0126 23:13:38.362246 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6tk5" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.249716 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" containerID="cri-o://5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" gracePeriod=30 Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.466900 
4995 generic.go:334] "Generic (PLEG): container finished" podID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerID="5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" exitCode=0 Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.467041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerDied","Data":"5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662"} Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.672722 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834748 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.834827 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835142 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835236 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835291 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835334 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.835405 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") pod \"c5507dd1-0894-4d9b-982d-817ebbb0092d\" (UID: \"c5507dd1-0894-4d9b-982d-817ebbb0092d\") " Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.836618 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.836767 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.843826 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.844318 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.844590 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.847666 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l" (OuterVolumeSpecName: "kube-api-access-n7f2l") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "kube-api-access-n7f2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.850094 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.870755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "c5507dd1-0894-4d9b-982d-817ebbb0092d" (UID: "c5507dd1-0894-4d9b-982d-817ebbb0092d"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937440 4995 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c5507dd1-0894-4d9b-982d-817ebbb0092d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937491 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937504 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7f2l\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-kube-api-access-n7f2l\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937518 4995 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c5507dd1-0894-4d9b-982d-817ebbb0092d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937529 4995 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937542 4995 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 23:13:59 crc kubenswrapper[4995]: I0126 23:13:59.937551 4995 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c5507dd1-0894-4d9b-982d-817ebbb0092d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 23:14:00 crc 
kubenswrapper[4995]: I0126 23:14:00.475591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" event={"ID":"c5507dd1-0894-4d9b-982d-817ebbb0092d","Type":"ContainerDied","Data":"c0781d7b5c2499fcb553527a8fd295fe436cb8680c543a89922297ff4d9b554f"} Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.475678 4995 scope.go:117] "RemoveContainer" containerID="5f6d3ec7b74d90b9b5fb45870ef587ee2f0fc428a2b3bcd5b815fc5bb39eb662" Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.475757 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-hjxrn" Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.538186 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:14:00 crc kubenswrapper[4995]: I0126 23:14:00.538246 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-hjxrn"] Jan 26 23:14:02 crc kubenswrapper[4995]: I0126 23:14:02.528901 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" path="/var/lib/kubelet/pods/c5507dd1-0894-4d9b-982d-817ebbb0092d/volumes" Jan 26 23:14:10 crc kubenswrapper[4995]: I0126 23:14:10.893990 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:14:10 crc kubenswrapper[4995]: I0126 23:14:10.894734 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:14:40 crc kubenswrapper[4995]: I0126 23:14:40.894391 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:14:40 crc kubenswrapper[4995]: I0126 23:14:40.895093 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.163746 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: E0126 23:15:00.164518 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.164536 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.164670 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5507dd1-0894-4d9b-982d-817ebbb0092d" containerName="registry" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.165213 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.167354 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.167510 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.173332 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.332985 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.333083 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.333146 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434387 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.434556 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.435336 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.442766 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.451326 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"collect-profiles-29491155-znsqf\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.483353 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.672789 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf"] Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.838069 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerStarted","Data":"1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836"} Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.838123 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerStarted","Data":"a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff"} Jan 26 23:15:00 crc kubenswrapper[4995]: I0126 23:15:00.853144 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" 
podStartSLOduration=0.85313057 podStartE2EDuration="853.13057ms" podCreationTimestamp="2026-01-26 23:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:15:00.850624828 +0000 UTC m=+405.015332303" watchObservedRunningTime="2026-01-26 23:15:00.85313057 +0000 UTC m=+405.017838035" Jan 26 23:15:01 crc kubenswrapper[4995]: I0126 23:15:01.846661 4995 generic.go:334] "Generic (PLEG): container finished" podID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerID="1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836" exitCode=0 Jan 26 23:15:01 crc kubenswrapper[4995]: I0126 23:15:01.847192 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerDied","Data":"1a29ecbf7c1dc8a0a44da58998a1ee9726769c1c2e698fc6c995631738b17836"} Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.031124 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175938 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.175972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") pod \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\" (UID: \"eedc9650-dfb6-4f85-854a-c4f87310cdc9\") " Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.177016 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume" (OuterVolumeSpecName: "config-volume") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.183969 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.186457 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq" (OuterVolumeSpecName: "kube-api-access-t6kgq") pod "eedc9650-dfb6-4f85-854a-c4f87310cdc9" (UID: "eedc9650-dfb6-4f85-854a-c4f87310cdc9"). InnerVolumeSpecName "kube-api-access-t6kgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278011 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6kgq\" (UniqueName: \"kubernetes.io/projected/eedc9650-dfb6-4f85-854a-c4f87310cdc9-kube-api-access-t6kgq\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278063 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eedc9650-dfb6-4f85-854a-c4f87310cdc9-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.278076 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eedc9650-dfb6-4f85-854a-c4f87310cdc9-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860135 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" event={"ID":"eedc9650-dfb6-4f85-854a-c4f87310cdc9","Type":"ContainerDied","Data":"a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff"} Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860202 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3256bdf9b257aeb9d374e3b0ea9e090ff619c32c4090159693dd0fc5ce813ff" Jan 26 23:15:03 crc kubenswrapper[4995]: I0126 23:15:03.860243 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491155-znsqf" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893338 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893737 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.893794 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.894531 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:15:10 crc kubenswrapper[4995]: I0126 23:15:10.894665 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" gracePeriod=600 Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.916246 4995 generic.go:334] "Generic (PLEG): container 
finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" exitCode=0 Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.916326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29"} Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.917852 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} Jan 26 23:15:11 crc kubenswrapper[4995]: I0126 23:15:11.917921 4995 scope.go:117] "RemoveContainer" containerID="3297881486aa80f570c0e5c5ba26255015481d51bb357f96fd6df0b63bb1ec0c" Jan 26 23:17:40 crc kubenswrapper[4995]: I0126 23:17:40.893866 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:17:40 crc kubenswrapper[4995]: I0126 23:17:40.894746 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:10 crc kubenswrapper[4995]: I0126 23:18:10.893773 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:18:10 crc kubenswrapper[4995]: I0126 23:18:10.894680 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.893369 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.893994 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894058 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894678 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:18:40 crc kubenswrapper[4995]: I0126 23:18:40.894749 4995 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" gracePeriod=600 Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277532 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" exitCode=0 Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab"} Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} Jan 26 23:18:41 crc kubenswrapper[4995]: I0126 23:18:41.277902 4995 scope.go:117] "RemoveContainer" containerID="91eb61e09ae5d6d6198d16f6e7e69e569eb136d572b2d062913b6b75ef9fce29" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.640617 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:50 crc kubenswrapper[4995]: E0126 23:18:50.641459 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.641475 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" 
containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.641618 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedc9650-dfb6-4f85-854a-c4f87310cdc9" containerName="collect-profiles" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.642503 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.645833 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.656121 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745319 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745421 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.745449 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846183 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846223 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846764 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.846970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.882205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:50 crc kubenswrapper[4995]: I0126 23:18:50.960618 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:51 crc kubenswrapper[4995]: I0126 23:18:51.235905 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh"] Jan 26 23:18:51 crc kubenswrapper[4995]: I0126 23:18:51.358344 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerStarted","Data":"8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959"} Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.365397 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="85eb0fa94d63160740827492f277358e3378c81106c204db8d7e074a29c14217" exitCode=0 Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.365480 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"85eb0fa94d63160740827492f277358e3378c81106c204db8d7e074a29c14217"} Jan 26 23:18:52 crc kubenswrapper[4995]: I0126 23:18:52.366875 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:18:54 crc kubenswrapper[4995]: I0126 23:18:54.379387 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="c2a02a28d3b2dfbaeacf2a40582479f8bde3db6c4aafa53f23e9dc18038ba3c1" exitCode=0 Jan 26 23:18:54 crc kubenswrapper[4995]: I0126 23:18:54.379515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" 
event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"c2a02a28d3b2dfbaeacf2a40582479f8bde3db6c4aafa53f23e9dc18038ba3c1"} Jan 26 23:18:55 crc kubenswrapper[4995]: I0126 23:18:55.392444 4995 generic.go:334] "Generic (PLEG): container finished" podID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerID="484233adc44574c1d5f7ad430fc69ecaaf1d958455f2abe50410f50997ca946c" exitCode=0 Jan 26 23:18:55 crc kubenswrapper[4995]: I0126 23:18:55.392511 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"484233adc44574c1d5f7ad430fc69ecaaf1d958455f2abe50410f50997ca946c"} Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.747770 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848707 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.848815 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") pod \"388e02fc-e28d-4d4a-94ec-464eb7573a8d\" (UID: 
\"388e02fc-e28d-4d4a-94ec-464eb7573a8d\") " Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.852384 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle" (OuterVolumeSpecName: "bundle") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.857930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c" (OuterVolumeSpecName: "kube-api-access-xkz4c") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "kube-api-access-xkz4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.887613 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util" (OuterVolumeSpecName: "util") pod "388e02fc-e28d-4d4a-94ec-464eb7573a8d" (UID: "388e02fc-e28d-4d4a-94ec-464eb7573a8d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.949966 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.950016 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/388e02fc-e28d-4d4a-94ec-464eb7573a8d-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:56 crc kubenswrapper[4995]: I0126 23:18:56.950036 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkz4c\" (UniqueName: \"kubernetes.io/projected/388e02fc-e28d-4d4a-94ec-464eb7573a8d-kube-api-access-xkz4c\") on node \"crc\" DevicePath \"\"" Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" event={"ID":"388e02fc-e28d-4d4a-94ec-464eb7573a8d","Type":"ContainerDied","Data":"8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959"} Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415705 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8277d63f614db9ae56e3251d7d5e84985fd410093cab7a766b2f9a9f29668959" Jan 26 23:18:57 crc kubenswrapper[4995]: I0126 23:18:57.415298 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.381683 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382726 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" containerID="cri-o://681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382803 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" containerID="cri-o://01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382875 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382901 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" containerID="cri-o://f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.382970 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="kube-rbac-proxy-node" containerID="cri-o://424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.383022 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" containerID="cri-o://756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.383002 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" containerID="cri-o://eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.484683 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" containerID="cri-o://e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" gracePeriod=30 Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.755963 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.763131 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-acl-logging/0.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.763798 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-controller/0.log" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.764260 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839579 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2d2rq"] Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839889 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839914 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839932 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="pull" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839943 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="pull" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839959 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.839971 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.839984 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="util" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840013 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="util" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840028 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840038 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840049 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840060 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840074 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840084 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840136 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840152 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840163 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840177 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc 
kubenswrapper[4995]: I0126 23:19:01.840188 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840203 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840213 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840229 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840240 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840259 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kubecfg-setup" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840269 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kubecfg-setup" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840286 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840297 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840446 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="northd" Jan 26 23:19:01 crc 
kubenswrapper[4995]: I0126 23:19:01.840460 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="388e02fc-e28d-4d4a-94ec-464eb7573a8d" containerName="extract" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840476 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840487 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840503 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="sbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840517 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovn-acl-logging" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840530 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840542 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="kube-rbac-proxy-node" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840556 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840567 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="nbdb" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840582 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" 
containerName="ovn-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: E0126 23:19:01.840717 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840729 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.840880 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerName="ovnkube-controller" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.844012 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871755 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871802 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.871864 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872009 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872087 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872299 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872458 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872483 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872510 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872535 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872550 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash" (OuterVolumeSpecName: "host-slash") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872563 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872584 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872606 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872638 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872631 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872660 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872685 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872713 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872727 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872745 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872783 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872805 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") pod \"be4486f1-6ac2-4655-aff8-634049c9aa6c\" (UID: \"be4486f1-6ac2-4655-aff8-634049c9aa6c\") " Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872934 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872955 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872977 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.872992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873016 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873014 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: 
"be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873035 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873052 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873071 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873110 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873143 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873160 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873181 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log" (OuterVolumeSpecName: "node-log") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873240 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873269 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873293 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: 
\"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873446 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873493 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket" (OuterVolumeSpecName: "log-socket") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873507 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873512 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873538 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873558 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873592 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873630 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873659 4995 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873670 4995 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873680 4995 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873690 4995 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873700 4995 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873708 4995 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873716 4995 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 
23:19:01.873724 4995 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873732 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873740 4995 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873748 4995 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873756 4995 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873764 4995 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.873772 4995 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.885286 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.885360 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z" (OuterVolumeSpecName: "kube-api-access-ngr8z") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "kube-api-access-ngr8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.897039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "be4486f1-6ac2-4655-aff8-634049c9aa6c" (UID: "be4486f1-6ac2-4655-aff8-634049c9aa6c"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975008 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975149 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975189 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-log-socket\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975140 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-systemd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975202 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975273 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-systemd-units\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975367 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-netd\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975384 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975465 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975496 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975564 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-kubelet\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975601 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975603 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-slash\") 
pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975626 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-cni-bin\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975682 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975721 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: 
\"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975749 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975821 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975842 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975862 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975889 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975986 4995 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/be4486f1-6ac2-4655-aff8-634049c9aa6c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976004 4995 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976017 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngr8z\" (UniqueName: \"kubernetes.io/projected/be4486f1-6ac2-4655-aff8-634049c9aa6c-kube-api-access-ngr8z\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976030 4995 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976041 4995 reconciler_common.go:293] 
"Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976052 4995 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/be4486f1-6ac2-4655-aff8-634049c9aa6c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976083 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-node-log\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975752 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-run-netns\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.975777 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976441 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-env-overrides\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" 
Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976467 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-etc-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976489 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-ovn\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976501 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-var-lib-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976521 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f03c0b25-4269-4418-9106-08802fbf9f1a-run-openvswitch\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-script-lib\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.976726 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f03c0b25-4269-4418-9106-08802fbf9f1a-ovnkube-config\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.980019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f03c0b25-4269-4418-9106-08802fbf9f1a-ovn-node-metrics-cert\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:01 crc kubenswrapper[4995]: I0126 23:19:01.997089 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpklb\" (UniqueName: \"kubernetes.io/projected/f03c0b25-4269-4418-9106-08802fbf9f1a-kube-api-access-gpklb\") pod \"ovnkube-node-2d2rq\" (UID: \"f03c0b25-4269-4418-9106-08802fbf9f1a\") " pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.161337 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:02 crc kubenswrapper[4995]: W0126 23:19:02.196290 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03c0b25_4269_4418_9106_08802fbf9f1a.slice/crio-19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2 WatchSource:0}: Error finding container 19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2: Status 404 returned error can't find the container with id 19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.444821 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03c0b25-4269-4418-9106-08802fbf9f1a" containerID="cf472edd5a152f510eb6e5f79e8533795602599493a77c5746f934a3ea9233e1" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.444851 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerDied","Data":"cf472edd5a152f510eb6e5f79e8533795602599493a77c5746f934a3ea9233e1"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.445158 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"19fac327ba6abd9994dd32d4df3687589da88106fbd236094ed36ded9daa9ff2"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.447633 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448292 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448359 
4995 generic.go:334] "Generic (PLEG): container finished" podID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" exitCode=2 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448482 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerDied","Data":"c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448593 4995 scope.go:117] "RemoveContainer" containerID="cd386742613389a9da858406fdef7a6c9499ca90b654d08456cae945a00d2c81" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.448990 4995 scope.go:117] "RemoveContainer" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.449304 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hln88_openshift-multus(4ba70657-ea12-4a85-9ec3-c1423b5b6912)\"" pod="openshift-multus/multus-hln88" podUID="4ba70657-ea12-4a85-9ec3-c1423b5b6912" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.455735 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovnkube-controller/2.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.459327 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-acl-logging/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.459961 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l9xmp_be4486f1-6ac2-4655-aff8-634049c9aa6c/ovn-controller/0.log" Jan 26 23:19:02 crc kubenswrapper[4995]: 
I0126 23:19:02.460452 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460490 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460507 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460521 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460534 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460547 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" exitCode=0 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460560 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" exitCode=143 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460574 4995 generic.go:334] "Generic (PLEG): container finished" podID="be4486f1-6ac2-4655-aff8-634049c9aa6c" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" 
exitCode=143 Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460605 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460670 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460710 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460731 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460752 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460772 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460783 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460794 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460805 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460815 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460826 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460837 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460849 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460860 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460874 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460891 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460903 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460913 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460925 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460935 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 
23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460945 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460956 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460966 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460976 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.460987 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461018 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461031 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461043 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461053 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461064 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461075 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461086 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461096 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461144 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461159 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" event={"ID":"be4486f1-6ac2-4655-aff8-634049c9aa6c","Type":"ContainerDied","Data":"0108074f5a92b88611ab160f29c724e30a5806d5f87702c7dcc0e14bc5062f52"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461204 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461216 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461227 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461238 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461248 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461258 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461270 4995 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461281 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461291 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461301 4995 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.461444 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l9xmp" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.506698 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.556714 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.567290 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l9xmp"] Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.579330 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.610940 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.630337 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.647197 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.667362 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.692951 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.743717 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.782076 4995 scope.go:117] "RemoveContainer" 
containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.814638 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.868063 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869145 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869194 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869218 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869424 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 
09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869444 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869472 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869657 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869676 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not 
exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869704 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.869890 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869909 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.869936 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870138 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870173 4995 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870187 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870442 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870491 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.870666 4995 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870686 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.870697 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.871977 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.871997 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could 
not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872026 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.872337 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872385 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872418 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: E0126 23:19:02.872711 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: 
ID does not exist" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872768 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.872787 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873007 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873025 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873338 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = 
NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873356 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873553 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873570 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873804 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.873817 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc 
kubenswrapper[4995]: I0126 23:19:02.874392 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874412 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874666 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.874685 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.875548 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container 
with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.875593 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.876859 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.876889 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877234 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877254 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877696 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.877720 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878031 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878050 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878312 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not 
exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878331 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878529 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878547 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878779 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.878818 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884078 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status 
\"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884130 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884440 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.884477 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885257 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885300 4995 scope.go:117] "RemoveContainer" 
containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885554 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885588 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885928 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.885947 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886321 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could 
not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886353 4995 scope.go:117] "RemoveContainer" containerID="e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886630 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e"} err="failed to get container status \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": rpc error: code = NotFound desc = could not find container \"e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e\": container with ID starting with e780731ff462804fa64e1d2b280f719e9647e646ac7e4cc0a9977ed5da0d659e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.886654 4995 scope.go:117] "RemoveContainer" containerID="09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887042 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad"} err="failed to get container status \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": rpc error: code = NotFound desc = could not find container \"09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad\": container with ID starting with 09727ef1eea0fa484074eb373f31132f2ba94426476d24d24d5f5d1dc94ad8ad not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887063 4995 scope.go:117] "RemoveContainer" containerID="f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 
23:19:02.887378 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e"} err="failed to get container status \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": rpc error: code = NotFound desc = could not find container \"f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e\": container with ID starting with f8a0db4c45d113f3975abaa1043b191fc17f986319c9edb22b797ec7794fd77e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887397 4995 scope.go:117] "RemoveContainer" containerID="01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887645 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7"} err="failed to get container status \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": rpc error: code = NotFound desc = could not find container \"01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7\": container with ID starting with 01ce7d9589da1f475442a2d378794e2ae2a40dcecad998f31df5f1a03200dba7 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887678 4995 scope.go:117] "RemoveContainer" containerID="eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887934 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845"} err="failed to get container status \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": rpc error: code = NotFound desc = could not find container \"eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845\": container with ID starting with 
eb6f85fc5f4352310e606a9c8e941b3e50b58abe5667f977f0c40efdd7ef7845 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.887951 4995 scope.go:117] "RemoveContainer" containerID="4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888137 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f"} err="failed to get container status \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": rpc error: code = NotFound desc = could not find container \"4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f\": container with ID starting with 4f1e715ce6e5948bbd9357954cb9f3af799d6875aca4823d3a659f087cba2a1f not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888151 4995 scope.go:117] "RemoveContainer" containerID="424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888330 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6"} err="failed to get container status \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": rpc error: code = NotFound desc = could not find container \"424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6\": container with ID starting with 424eb2c4afadd1c2792ec5b4ad556928ca01f047498f94db1e9ec44416bfc5f6 not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888347 4995 scope.go:117] "RemoveContainer" containerID="756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888621 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde"} err="failed to get container status \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": rpc error: code = NotFound desc = could not find container \"756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde\": container with ID starting with 756e1557b335dd9365955c4af628f4b41dd52d73bb31a1d395f604f2fe507cde not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.888654 4995 scope.go:117] "RemoveContainer" containerID="681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.891761 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e"} err="failed to get container status \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": rpc error: code = NotFound desc = could not find container \"681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e\": container with ID starting with 681828c5341be905a270ff455c6eaaa410df56ccbe6e5d35acf3f1441661d59e not found: ID does not exist" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.891797 4995 scope.go:117] "RemoveContainer" containerID="0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab" Jan 26 23:19:02 crc kubenswrapper[4995]: I0126 23:19:02.892403 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab"} err="failed to get container status \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": rpc error: code = NotFound desc = could not find container \"0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab\": container with ID starting with 0e0b6ad120651eef34662fa4fffdd5df3e1b446503f04fb14f27a4162cf8a6ab not found: ID does not 
exist" Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"07a55a4f1d8fa7240764895ffcdbc647325fe2e614a028960c1e25473169e72e"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469254 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"19d6f00caedca58ea3b184fa38255377d1758417479bbc5b588d479bb2ed5d42"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"a4c8ba38af5ae0c8e752248a3ef313c3f4a7c9f30f1d2d3fba8d51f9b23de9c4"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469278 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"34eefd48899ef598accbf5d9b83684486030f06f9132dc791c6b44c99f28bd1c"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469287 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"b71cde1cae14bf2eb6974116e3b69ab08ea76eede35f0107d983ae68d3d375a2"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.469296 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"68666f537246ef7da037a59ed97650ccf9118ad001a96c6222c1f7bcaabd7221"} Jan 26 23:19:03 crc kubenswrapper[4995]: I0126 23:19:03.472166 4995 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log" Jan 26 23:19:04 crc kubenswrapper[4995]: I0126 23:19:04.523185 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be4486f1-6ac2-4655-aff8-634049c9aa6c" path="/var/lib/kubelet/pods/be4486f1-6ac2-4655-aff8-634049c9aa6c/volumes" Jan 26 23:19:06 crc kubenswrapper[4995]: I0126 23:19:06.491127 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"e0392f1d2242398feacbe8979fcf410689a06392e803ff3247d88000f0700e9a"} Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.722154 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"] Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.723078 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.724948 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.725225 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-4fv56" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.726987 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.769350 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.846801 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"] Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.847563 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.850342 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.850507 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-nbphx" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.867047 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"] Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.867821 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.870362 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.902581 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8sqb\" (UniqueName: \"kubernetes.io/projected/a1c71758-f818-4fd6-a985-4aa33488e96c-kube-api-access-s8sqb\") pod \"obo-prometheus-operator-68bc856cb9-zfmp4\" (UID: \"a1c71758-f818-4fd6-a985-4aa33488e96c\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972189 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972316 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.972343 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.976633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:07 crc kubenswrapper[4995]: I0126 23:19:07.984644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/684ae2c3-240e-4b73-9aaa-391ad824f47d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r\" (UID: \"684ae2c3-240e-4b73-9aaa-391ad824f47d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.035899 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.036025 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.036876 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.039279 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6x8tq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.039495 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080737 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080820 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080868 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.080931 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.084137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.084154 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f936e96-9a6c-4e10-97a1-ccbf7e8c14de-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g\" (UID: \"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087076 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087138 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087157 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.087193 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(d261bfad987a3e1eb80e548db1bfbd0f12d4bde3177781b2331f91c3940dad34): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podUID="a1c71758-f818-4fd6-a985-4aa33488e96c" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.164353 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182088 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.182824 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.186061 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/549a554b-0ef6-4d8b-b2cf-4445474572d2-observability-operator-tls\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186253 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186310 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186330 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.186369 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(0bb9887f4b2c2bded796b9c87bd12eaf08d320e2a172dfe3039610bef55a57a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podUID="684ae2c3-240e-4b73-9aaa-391ad824f47d" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.199518 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvw9\" (UniqueName: \"kubernetes.io/projected/549a554b-0ef6-4d8b-b2cf-4445474572d2-kube-api-access-ndvw9\") pod \"observability-operator-59bdc8b94-g4lwc\" (UID: \"549a554b-0ef6-4d8b-b2cf-4445474572d2\") " pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201310 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201370 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201391 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.201439 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(24cfa2e077858f4e8240b0e85e85f71a6af791215f459c90f010ff271fa43c2a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podUID="4f936e96-9a6c-4e10-97a1-ccbf7e8c14de" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.243820 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.244478 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.248511 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nbwrf" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.283656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.283752 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.384919 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.384974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 
23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.385993 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.395689 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.402987 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lxnv\" (UniqueName: \"kubernetes.io/projected/f8710ec9-2fc5-400b-83d0-0411f6e7fdc8-kube-api-access-5lxnv\") pod \"perses-operator-5bf474d74f-ngw26\" (UID: \"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8\") " pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417639 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417824 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.417922 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.418043 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(265ee90a4a9e5c3722bf282496b7d076023c020c09df2e088208512074a58615): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podUID="549a554b-0ef6-4d8b-b2cf-4445474572d2" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505066 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" event={"ID":"f03c0b25-4269-4418-9106-08802fbf9f1a","Type":"ContainerStarted","Data":"9846f84591fb6d95e62aa5a937a988c9ee991c667a387c6ec778d409a3a0043f"} Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505487 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.505537 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.539931 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" podStartSLOduration=7.5399179830000005 podStartE2EDuration="7.539917983s" podCreationTimestamp="2026-01-26 23:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:19:08.539766979 +0000 UTC m=+652.704474444" watchObservedRunningTime="2026-01-26 23:19:08.539917983 +0000 UTC m=+652.704625448" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.545776 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.563434 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583064 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583138 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583160 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.583197 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(c97dd20ddb18a66158ad553864ce1acc029c08cd3c0b1c934345b184f9568d25): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podUID="f8710ec9-2fc5-400b-83d0-0411f6e7fdc8" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692444 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692544 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.692847 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.696778 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.696897 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.697315 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.701260 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.709086 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.710548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.710913 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716059 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"] Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716213 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: I0126 23:19:08.716730 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752055 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752132 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752154 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.752197 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators(4f936e96-9a6c-4e10-97a1-ccbf7e8c14de)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_openshift-operators_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de_0(fec867562b361b5b9536e553c675145f44d9b417c4ffc37004b90898e1cc1b6b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podUID="4f936e96-9a6c-4e10-97a1-ccbf7e8c14de" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760265 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760325 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760347 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.760388 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators(a1c71758-f818-4fd6-a985-4aa33488e96c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-zfmp4_openshift-operators_a1c71758-f818-4fd6-a985-4aa33488e96c_0(335d6e08a7e28056033339d3997d4b5a532b90fb22657bb4e94d5dcf9e9ca250): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podUID="a1c71758-f818-4fd6-a985-4aa33488e96c" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781328 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781421 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781449 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.781514 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators(684ae2c3-240e-4b73-9aaa-391ad824f47d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_openshift-operators_684ae2c3-240e-4b73-9aaa-391ad824f47d_0(69fc5a34ccedd2f409dfe351acbdca3d6d341d9849ac347e4ad3833a6d994bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podUID="684ae2c3-240e-4b73-9aaa-391ad824f47d" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793475 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793517 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793538 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:08 crc kubenswrapper[4995]: E0126 23:19:08.793576 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-g4lwc_openshift-operators(549a554b-0ef6-4d8b-b2cf-4445474572d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-g4lwc_openshift-operators_549a554b-0ef6-4d8b-b2cf-4445474572d2_0(53767d6f43194f3193f17d7623de429b07a07da31553c174e372a04ed975b099): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podUID="549a554b-0ef6-4d8b-b2cf-4445474572d2" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.510707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.511273 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.511794 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535418 4995 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535503 4995 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535530 4995 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:09 crc kubenswrapper[4995]: E0126 23:19:09.535610 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-ngw26_openshift-operators(f8710ec9-2fc5-400b-83d0-0411f6e7fdc8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-ngw26_openshift-operators_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8_0(e211785d999f5b0877ca4a0425cec50d15973584631d5cf6ff6ebc0a19011499): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podUID="f8710ec9-2fc5-400b-83d0-0411f6e7fdc8" Jan 26 23:19:09 crc kubenswrapper[4995]: I0126 23:19:09.556788 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:13 crc kubenswrapper[4995]: I0126 23:19:13.517116 4995 scope.go:117] "RemoveContainer" containerID="c1c729b92e56f57861fb9e9cb3255d4e859441764e1404ed6d2ec73d8bf2cc23" Jan 26 23:19:14 crc kubenswrapper[4995]: I0126 23:19:14.562581 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hln88_4ba70657-ea12-4a85-9ec3-c1423b5b6912/kube-multus/1.log" Jan 26 23:19:14 crc kubenswrapper[4995]: I0126 23:19:14.562995 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hln88" event={"ID":"4ba70657-ea12-4a85-9ec3-c1423b5b6912","Type":"ContainerStarted","Data":"fa90d0287da0bbe24975f7263e98dcd40797e5854c53a5fc11a45864231005f7"} Jan 26 23:19:16 crc kubenswrapper[4995]: I0126 23:19:16.776222 4995 scope.go:117] "RemoveContainer" containerID="866b4e150df34bb856c7909125a903ef3e4e3722c867e9f3bd61353008835213" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.516756 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.517975 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" Jan 26 23:19:19 crc kubenswrapper[4995]: I0126 23:19:19.961716 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g"] Jan 26 23:19:20 crc kubenswrapper[4995]: I0126 23:19:20.610644 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" event={"ID":"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de","Type":"ContainerStarted","Data":"b316deca6816f1dfc397ed716c14619d5400dedf4a6f09333ebdc9a1bcdaeb57"} Jan 26 23:19:21 crc kubenswrapper[4995]: I0126 23:19:21.516548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:21 crc kubenswrapper[4995]: I0126 23:19:21.517251 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" Jan 26 23:19:22 crc kubenswrapper[4995]: I0126 23:19:22.010770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4"] Jan 26 23:19:22 crc kubenswrapper[4995]: W0126 23:19:22.020410 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c71758_f818_4fd6_a985_4aa33488e96c.slice/crio-45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973 WatchSource:0}: Error finding container 45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973: Status 404 returned error can't find the container with id 45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973 Jan 26 23:19:22 crc kubenswrapper[4995]: I0126 23:19:22.624980 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" 
event={"ID":"a1c71758-f818-4fd6-a985-4aa33488e96c","Type":"ContainerStarted","Data":"45e5128677a7a365bfd7938b933476308ccb29588150d16fb8d7d45add106973"} Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517084 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517466 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.517889 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:23 crc kubenswrapper[4995]: I0126 23:19:23.518595 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" Jan 26 23:19:24 crc kubenswrapper[4995]: I0126 23:19:24.516655 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:24 crc kubenswrapper[4995]: I0126 23:19:24.517636 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.071060 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ngw26"] Jan 26 23:19:25 crc kubenswrapper[4995]: W0126 23:19:25.080436 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8710ec9_2fc5_400b_83d0_0411f6e7fdc8.slice/crio-3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4 WatchSource:0}: Error finding container 3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4: Status 404 returned error can't find the container with id 3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4 Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.118035 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-g4lwc"] Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.289846 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r"] Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.651731 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" event={"ID":"4f936e96-9a6c-4e10-97a1-ccbf7e8c14de","Type":"ContainerStarted","Data":"a4bfe53113fddbce132dfdaf6928778be5968cb4d1607a135032d2f5a01ca7e8"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.653019 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" event={"ID":"549a554b-0ef6-4d8b-b2cf-4445474572d2","Type":"ContainerStarted","Data":"7e91df246ea222c267f0287e4f6f25205ba3a9238306a4a5d06f220ae1b5d453"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.654196 4995 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" event={"ID":"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8","Type":"ContainerStarted","Data":"3cfd826e47cd24c0fce1f3861079b2de8b895f301a330ae7e3a837e6bc1ebee4"} Jan 26 23:19:25 crc kubenswrapper[4995]: I0126 23:19:25.684401 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g" podStartSLOduration=13.802845018 podStartE2EDuration="18.684376391s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="2026-01-26 23:19:19.975864944 +0000 UTC m=+664.140572449" lastFinishedPulling="2026-01-26 23:19:24.857396357 +0000 UTC m=+669.022103822" observedRunningTime="2026-01-26 23:19:25.678072093 +0000 UTC m=+669.842779558" watchObservedRunningTime="2026-01-26 23:19:25.684376391 +0000 UTC m=+669.849083886" Jan 26 23:19:25 crc kubenswrapper[4995]: W0126 23:19:25.949709 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684ae2c3_240e_4b73_9aaa_391ad824f47d.slice/crio-15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d WatchSource:0}: Error finding container 15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d: Status 404 returned error can't find the container with id 15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.673434 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" event={"ID":"684ae2c3-240e-4b73-9aaa-391ad824f47d","Type":"ContainerStarted","Data":"2418ea308002b3729fdd946d7908ee6ca2946e91bf0feb8e68726653317797e0"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.673477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" 
event={"ID":"684ae2c3-240e-4b73-9aaa-391ad824f47d","Type":"ContainerStarted","Data":"15b5ba3eb2648d44973d9a417a9675ac2f7b66068828c591bf763fd81d3b6b0d"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.684578 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" event={"ID":"a1c71758-f818-4fd6-a985-4aa33488e96c","Type":"ContainerStarted","Data":"3daa3378178ea242c56404676343a5072503c94bf19589f95f11b05a99fa0d46"} Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.699946 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r" podStartSLOduration=19.699923752 podStartE2EDuration="19.699923752s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:19:26.694702941 +0000 UTC m=+670.859410406" watchObservedRunningTime="2026-01-26 23:19:26.699923752 +0000 UTC m=+670.864631227" Jan 26 23:19:26 crc kubenswrapper[4995]: I0126 23:19:26.718827 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-zfmp4" podStartSLOduration=15.712679721 podStartE2EDuration="19.718809406s" podCreationTimestamp="2026-01-26 23:19:07 +0000 UTC" firstStartedPulling="2026-01-26 23:19:22.031406458 +0000 UTC m=+666.196113923" lastFinishedPulling="2026-01-26 23:19:26.037536143 +0000 UTC m=+670.202243608" observedRunningTime="2026-01-26 23:19:26.71137779 +0000 UTC m=+670.876085255" watchObservedRunningTime="2026-01-26 23:19:26.718809406 +0000 UTC m=+670.883516871" Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.697145 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" 
event={"ID":"f8710ec9-2fc5-400b-83d0-0411f6e7fdc8","Type":"ContainerStarted","Data":"23946c88c7a1327f3b646682d6d7d8c31d7b6312e27929cb8855d516895b04b2"} Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.697529 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:28 crc kubenswrapper[4995]: I0126 23:19:28.724912 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" podStartSLOduration=18.209376369 podStartE2EDuration="20.724897269s" podCreationTimestamp="2026-01-26 23:19:08 +0000 UTC" firstStartedPulling="2026-01-26 23:19:25.0824238 +0000 UTC m=+669.247131255" lastFinishedPulling="2026-01-26 23:19:27.59794469 +0000 UTC m=+671.762652155" observedRunningTime="2026-01-26 23:19:28.720397226 +0000 UTC m=+672.885104691" watchObservedRunningTime="2026-01-26 23:19:28.724897269 +0000 UTC m=+672.889604734" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.724220 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" event={"ID":"549a554b-0ef6-4d8b-b2cf-4445474572d2","Type":"ContainerStarted","Data":"52ed7660fd32105bec83a0d06ff30ba4ff37bd33829aac8e732bc60ca2c4685e"} Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.726365 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.773031 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" podStartSLOduration=17.827378072 podStartE2EDuration="23.773011057s" podCreationTimestamp="2026-01-26 23:19:08 +0000 UTC" firstStartedPulling="2026-01-26 23:19:25.146867309 +0000 UTC m=+669.311574774" lastFinishedPulling="2026-01-26 23:19:31.092500284 +0000 UTC m=+675.257207759" 
observedRunningTime="2026-01-26 23:19:31.770878734 +0000 UTC m=+675.935586209" watchObservedRunningTime="2026-01-26 23:19:31.773011057 +0000 UTC m=+675.937718522" Jan 26 23:19:31 crc kubenswrapper[4995]: I0126 23:19:31.787433 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-g4lwc" Jan 26 23:19:32 crc kubenswrapper[4995]: I0126 23:19:32.189198 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2d2rq" Jan 26 23:19:38 crc kubenswrapper[4995]: I0126 23:19:38.567326 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ngw26" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.363403 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.364757 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.366810 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.375062 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.425746 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.426022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.426152 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: 
I0126 23:19:39.527659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528144 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528487 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.528746 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.529170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.551361 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:39 crc kubenswrapper[4995]: I0126 23:19:39.682787 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.155461 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6"] Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780250 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="3de19504031a85b67e608f56cd36e235a222a4ad8949a92aee0a3bfe9a3e9411" exitCode=0 Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780412 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"3de19504031a85b67e608f56cd36e235a222a4ad8949a92aee0a3bfe9a3e9411"} Jan 26 23:19:40 crc kubenswrapper[4995]: I0126 23:19:40.780583 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerStarted","Data":"5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99"} Jan 26 23:19:42 crc kubenswrapper[4995]: I0126 23:19:42.794938 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="f660fb4ddbb9aa17c2ee2fa2bf46bff0879a10ca4eaf436e01da1c898740a1f9" exitCode=0 Jan 26 23:19:42 crc kubenswrapper[4995]: I0126 23:19:42.795125 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"f660fb4ddbb9aa17c2ee2fa2bf46bff0879a10ca4eaf436e01da1c898740a1f9"} Jan 26 23:19:43 crc kubenswrapper[4995]: I0126 23:19:43.803045 4995 generic.go:334] "Generic (PLEG): container finished" podID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerID="877ea637719eb54168db9f6e40b163aaba2f1e7f8b850b115b754f53459be5ae" exitCode=0 Jan 26 23:19:43 crc kubenswrapper[4995]: I0126 23:19:43.803144 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"877ea637719eb54168db9f6e40b163aaba2f1e7f8b850b115b754f53459be5ae"} Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.092719 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197428 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197467 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") pod \"c72f27ba-28b4-41be-a2e3-894496ce06fb\" (UID: \"c72f27ba-28b4-41be-a2e3-894496ce06fb\") " Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.197954 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle" (OuterVolumeSpecName: "bundle") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.209322 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d" (OuterVolumeSpecName: "kube-api-access-tl58d") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "kube-api-access-tl58d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.216867 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util" (OuterVolumeSpecName: "util") pod "c72f27ba-28b4-41be-a2e3-894496ce06fb" (UID: "c72f27ba-28b4-41be-a2e3-894496ce06fb"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298661 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl58d\" (UniqueName: \"kubernetes.io/projected/c72f27ba-28b4-41be-a2e3-894496ce06fb-kube-api-access-tl58d\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298716 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.298736 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c72f27ba-28b4-41be-a2e3-894496ce06fb-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.831898 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" event={"ID":"c72f27ba-28b4-41be-a2e3-894496ce06fb","Type":"ContainerDied","Data":"5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99"} Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.831940 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8d8f6eaf76cdd240751561dafd4d23164ab64ffb884727a96f9b38147f8f99" Jan 26 23:19:45 crc kubenswrapper[4995]: I0126 23:19:45.832002 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.028709 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030039 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="util" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030080 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="util" Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030135 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="pull" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030145 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="pull" Jan 26 23:19:51 crc kubenswrapper[4995]: E0126 23:19:51.030162 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030169 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.030426 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c72f27ba-28b4-41be-a2e3-894496ce06fb" containerName="extract" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.031321 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036255 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-mk8g8" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036523 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.036592 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.050605 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.177825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: \"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.279537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: \"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.308460 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kmh9\" (UniqueName: \"kubernetes.io/projected/1f224cbd-cdf6-474c-bcc6-a37358dcd4f5-kube-api-access-2kmh9\") pod \"nmstate-operator-646758c888-fnp66\" (UID: 
\"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5\") " pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.353166 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.611591 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-fnp66"] Jan 26 23:19:51 crc kubenswrapper[4995]: I0126 23:19:51.877572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" event={"ID":"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5","Type":"ContainerStarted","Data":"28787f53f246ec398a9fd01fe2a23032a814cfa5c0fe212950d47683f130f68c"} Jan 26 23:19:54 crc kubenswrapper[4995]: I0126 23:19:54.899225 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" event={"ID":"1f224cbd-cdf6-474c-bcc6-a37358dcd4f5","Type":"ContainerStarted","Data":"14ee2e36a2349337d284da497f7a139f7893006d0f993d02cf103a07010b2374"} Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.280126 4995 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.536068 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-fnp66" podStartSLOduration=6.847795236 podStartE2EDuration="9.536047574s" podCreationTimestamp="2026-01-26 23:19:51 +0000 UTC" firstStartedPulling="2026-01-26 23:19:51.629311538 +0000 UTC m=+695.794019013" lastFinishedPulling="2026-01-26 23:19:54.317563846 +0000 UTC m=+698.482271351" observedRunningTime="2026-01-26 23:19:54.937879219 +0000 UTC m=+699.102586724" watchObservedRunningTime="2026-01-26 23:20:00.536047574 +0000 UTC m=+704.700755029" Jan 26 23:20:00 crc 
kubenswrapper[4995]: I0126 23:20:00.539783 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.546467 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.547185 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.550302 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.553630 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-ctbps" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.553943 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.556704 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.560535 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614283 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614649 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.614700 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.622298 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-4nqd8"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.622959 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.692874 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.693529 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695467 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695698 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.695823 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nfqk5" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.705126 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715708 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715922 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.715990 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc 
kubenswrapper[4995]: I0126 23:20:00.716089 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.716363 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: E0126 23:20:00.715875 4995 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 23:20:00 crc kubenswrapper[4995]: E0126 23:20:00.716594 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair podName:4adb027e-2869-4cbc-bdb7-63ae41659c28 nodeName:}" failed. No retries permitted until 2026-01-26 23:20:01.216555219 +0000 UTC m=+705.381262684 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-jkj8f" (UID: "4adb027e-2869-4cbc-bdb7-63ae41659c28") : secret "openshift-nmstate-webhook" not found Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.733564 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7lgc\" (UniqueName: \"kubernetes.io/projected/49297381-c6bb-4ede-9f80-38ee237f7a3e-kube-api-access-p7lgc\") pod \"nmstate-metrics-54757c584b-75scl\" (UID: \"49297381-c6bb-4ede-9f80-38ee237f7a3e\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.733658 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shgqg\" (UniqueName: \"kubernetes.io/projected/4adb027e-2869-4cbc-bdb7-63ae41659c28-kube-api-access-shgqg\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817480 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfnw\" (UniqueName: 
\"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-nmstate-lock\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817805 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817863 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.817986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.818069 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-ovs-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.818208 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-dbus-socket\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.834929 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4zd\" (UniqueName: \"kubernetes.io/projected/8bd5c3be-b641-437a-9aad-bcd9a7dd2c56-kube-api-access-fv4zd\") pod \"nmstate-handler-4nqd8\" (UID: \"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56\") " pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.903712 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfnw\" (UniqueName: \"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918581 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.918598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.919440 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.922573 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.934979 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.942054 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfnw\" (UniqueName: \"kubernetes.io/projected/fa9c3198-27d3-4733-8c9c-ccc6f0168f0d-kube-api-access-6dfnw\") pod \"nmstate-console-plugin-7754f76f8b-8rf6d\" (UID: \"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:00 crc kubenswrapper[4995]: I0126 23:20:00.995596 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.002152 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.008873 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.024975 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122299 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122415 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122460 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122489 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.122562 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.135336 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-75scl"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.140756 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49297381_c6bb_4ede_9f80_38ee237f7a3e.slice/crio-bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34 WatchSource:0}: Error finding container bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34: Status 404 returned error can't find the container with id bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223863 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223927 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223953 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.223978 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224125 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.224770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d"] Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.226348 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.226591 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 
23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.227147 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.227188 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.229246 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.229428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/4adb027e-2869-4cbc-bdb7-63ae41659c28-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jkj8f\" (UID: \"4adb027e-2869-4cbc-bdb7-63ae41659c28\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.229848 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa9c3198_27d3_4733_8c9c_ccc6f0168f0d.slice/crio-d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752 WatchSource:0}: Error finding container d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752: Status 404 returned error can't find the 
container with id d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.230324 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.242281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"console-567f8c8d56-2j2x6\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.324781 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.514708 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.726355 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.728302 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05869402_35d4_4054_845a_e45b6e9ed633.slice/crio-caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4 WatchSource:0}: Error finding container caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4: Status 404 returned error can't find the container with id caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4 Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.734332 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f"] Jan 26 23:20:01 crc kubenswrapper[4995]: W0126 23:20:01.746929 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4adb027e_2869_4cbc_bdb7_63ae41659c28.slice/crio-c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e WatchSource:0}: Error finding container c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e: Status 404 returned error can't find the container with id c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.955217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerStarted","Data":"caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.957988 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" event={"ID":"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d","Type":"ContainerStarted","Data":"d1c471300885d10b760f69050e2b1875cc1ea4446b9a5d8100d0b032dd0f7752"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.959479 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"bb78b9b24e7faec5ff8c5a1a343c8a7befa855ee14becb479a73113aae658c34"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.960788 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4nqd8" event={"ID":"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56","Type":"ContainerStarted","Data":"dfa7d601151afd6a7670af153f9e71fc238de824a9821b1aef6e095f9b2b0b0d"} Jan 26 23:20:01 crc kubenswrapper[4995]: I0126 23:20:01.961764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" event={"ID":"4adb027e-2869-4cbc-bdb7-63ae41659c28","Type":"ContainerStarted","Data":"c381efa49c0ea08ee874eb08478cef338480cca39d8259bb4508d13243bedf4e"} Jan 26 23:20:03 crc kubenswrapper[4995]: I0126 23:20:03.978013 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerStarted","Data":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} Jan 26 23:20:03 crc kubenswrapper[4995]: I0126 23:20:03.999771 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-567f8c8d56-2j2x6" podStartSLOduration=3.999753772 podStartE2EDuration="3.999753772s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:20:03.997341592 +0000 UTC m=+708.162049067" 
watchObservedRunningTime="2026-01-26 23:20:03.999753772 +0000 UTC m=+708.164461237" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.985243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-4nqd8" event={"ID":"8bd5c3be-b641-437a-9aad-bcd9a7dd2c56","Type":"ContainerStarted","Data":"aae766fb357642ec2264a826a097d355e45d839f0e5c577cc0ed08f009d637ee"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.985605 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.987248 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" event={"ID":"4adb027e-2869-4cbc-bdb7-63ae41659c28","Type":"ContainerStarted","Data":"2cff811e4dfa4cc38dfa5cbeaa63f1f2cc63cc727c266e0d3042ff36631c4dee"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.987321 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.988968 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" event={"ID":"fa9c3198-27d3-4733-8c9c-ccc6f0168f0d","Type":"ContainerStarted","Data":"9e4ea379d5cd920f701d2b94ef9dedc5a1589b0828a33b78d3ec2901830164f2"} Jan 26 23:20:04 crc kubenswrapper[4995]: I0126 23:20:04.990251 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"f2d4fd9f8770438367fbe46ba91520e0b7936b237e3c17054982aa702abe2a3a"} Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.002184 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-4nqd8" podStartSLOduration=1.386037766 
podStartE2EDuration="5.002167663s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:00.978390876 +0000 UTC m=+705.143098351" lastFinishedPulling="2026-01-26 23:20:04.594520773 +0000 UTC m=+708.759228248" observedRunningTime="2026-01-26 23:20:05.001475046 +0000 UTC m=+709.166182521" watchObservedRunningTime="2026-01-26 23:20:05.002167663 +0000 UTC m=+709.166875128" Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.018314 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8rf6d" podStartSLOduration=1.653426943 podStartE2EDuration="5.018268048s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.231901534 +0000 UTC m=+705.396608999" lastFinishedPulling="2026-01-26 23:20:04.596742599 +0000 UTC m=+708.761450104" observedRunningTime="2026-01-26 23:20:05.01717237 +0000 UTC m=+709.181879835" watchObservedRunningTime="2026-01-26 23:20:05.018268048 +0000 UTC m=+709.182975513" Jan 26 23:20:05 crc kubenswrapper[4995]: I0126 23:20:05.053861 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" podStartSLOduration=2.208569208 podStartE2EDuration="5.053837191s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.74925996 +0000 UTC m=+705.913967435" lastFinishedPulling="2026-01-26 23:20:04.594527913 +0000 UTC m=+708.759235418" observedRunningTime="2026-01-26 23:20:05.050375824 +0000 UTC m=+709.215083289" watchObservedRunningTime="2026-01-26 23:20:05.053837191 +0000 UTC m=+709.218544666" Jan 26 23:20:08 crc kubenswrapper[4995]: I0126 23:20:08.015215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" 
event={"ID":"49297381-c6bb-4ede-9f80-38ee237f7a3e","Type":"ContainerStarted","Data":"2c9356a49cde6bb0a7c112a24a2bfc45c00bd81169ec1d79dcfd484f120bc591"} Jan 26 23:20:10 crc kubenswrapper[4995]: I0126 23:20:10.975868 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-4nqd8" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.004142 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-75scl" podStartSLOduration=4.674345938 podStartE2EDuration="11.004091241s" podCreationTimestamp="2026-01-26 23:20:00 +0000 UTC" firstStartedPulling="2026-01-26 23:20:01.142777355 +0000 UTC m=+705.307484820" lastFinishedPulling="2026-01-26 23:20:07.472522618 +0000 UTC m=+711.637230123" observedRunningTime="2026-01-26 23:20:08.0398748 +0000 UTC m=+712.204582305" watchObservedRunningTime="2026-01-26 23:20:11.004091241 +0000 UTC m=+715.168798746" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.325649 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.325753 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:11 crc kubenswrapper[4995]: I0126 23:20:11.334030 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:12 crc kubenswrapper[4995]: I0126 23:20:12.053735 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:20:12 crc kubenswrapper[4995]: I0126 23:20:12.126780 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:21 crc kubenswrapper[4995]: I0126 23:20:21.522215 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jkj8f" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.740317 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.742409 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.744251 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.751256 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.844980 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.845268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.845432 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946371 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946456 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946481 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.946960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.947077 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:35 crc kubenswrapper[4995]: I0126 23:20:35.966281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:36 crc kubenswrapper[4995]: I0126 23:20:36.064968 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:36 crc kubenswrapper[4995]: I0126 23:20:36.354213 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m"] Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.178960 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-zt9nn" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" containerID="cri-o://4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" gracePeriod=15 Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238843 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="4b1362bd825c081f5b994a1e689f02ccd29ba4f887dc007d7b96688f60cdfc9b" exitCode=0 Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238910 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"4b1362bd825c081f5b994a1e689f02ccd29ba4f887dc007d7b96688f60cdfc9b"} Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.238953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerStarted","Data":"4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6"} Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.997185 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zt9nn_e80b6b9d-3bfd-4315-8643-695c2101bddb/console/0.log" Jan 26 23:20:37 crc kubenswrapper[4995]: I0126 23:20:37.997517 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076508 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076565 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076622 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076719 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076758 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076827 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.076868 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") pod \"e80b6b9d-3bfd-4315-8643-695c2101bddb\" (UID: \"e80b6b9d-3bfd-4315-8643-695c2101bddb\") " Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca" (OuterVolumeSpecName: "service-ca") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077557 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.077612 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config" (OuterVolumeSpecName: "console-config") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.083751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.083788 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr" (OuterVolumeSpecName: "kube-api-access-tt5qr") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "kube-api-access-tt5qr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.085400 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e80b6b9d-3bfd-4315-8643-695c2101bddb" (UID: "e80b6b9d-3bfd-4315-8643-695c2101bddb"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.111221 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:38 crc kubenswrapper[4995]: E0126 23:20:38.112570 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.112606 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.113303 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerName="console" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.127545 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.127384 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178195 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178239 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178252 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tt5qr\" (UniqueName: \"kubernetes.io/projected/e80b6b9d-3bfd-4315-8643-695c2101bddb-kube-api-access-tt5qr\") on node \"crc\" DevicePath \"\"" Jan 26 
23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178263 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178273 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e80b6b9d-3bfd-4315-8643-695c2101bddb-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178285 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.178297 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e80b6b9d-3bfd-4315-8643-695c2101bddb-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246499 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-zt9nn_e80b6b9d-3bfd-4315-8643-695c2101bddb/console/0.log" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246553 4995 generic.go:334] "Generic (PLEG): container finished" podID="e80b6b9d-3bfd-4315-8643-695c2101bddb" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" exitCode=2 Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerDied","Data":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246626 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-zt9nn" event={"ID":"e80b6b9d-3bfd-4315-8643-695c2101bddb","Type":"ContainerDied","Data":"f8da331ad5479ba2deada0b967ed7ea0fd7ef2bec4a402a501182d5512dc16e8"} Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246649 4995 scope.go:117] "RemoveContainer" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.246708 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-zt9nn" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.267154 4995 scope.go:117] "RemoveContainer" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: E0126 23:20:38.267587 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": container with ID starting with 4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042 not found: ID does not exist" containerID="4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.267635 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042"} err="failed to get container status \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": rpc error: code = NotFound desc = could not find container \"4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042\": container with ID starting with 4297d9f35da42714e4fbdfbbb5d6d03d9289196e3daa3adaaa4b15864d188042 not found: ID does not exist" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279271 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279341 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.279374 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.281831 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.285588 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-zt9nn"] Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381044 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381378 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381422 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381840 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.381907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.400445 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") pod \"redhat-operators-fr95r\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.457752 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.524594 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e80b6b9d-3bfd-4315-8643-695c2101bddb" path="/var/lib/kubelet/pods/e80b6b9d-3bfd-4315-8643-695c2101bddb/volumes" Jan 26 23:20:38 crc kubenswrapper[4995]: I0126 23:20:38.667899 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.254598 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="c32abd6d376b09e4aa0e4ed7e261fe97d4985391e608bb9887cb7657a7cec8bf" exitCode=0 Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.254654 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"c32abd6d376b09e4aa0e4ed7e261fe97d4985391e608bb9887cb7657a7cec8bf"} Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256574 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5" exitCode=0 Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256619 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"} Jan 26 23:20:39 crc kubenswrapper[4995]: I0126 23:20:39.256657 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" 
event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"466e4ff2680bff531e80450abec354d669e22023a880a1983570609fe3fd89c0"} Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.266723 4995 generic.go:334] "Generic (PLEG): container finished" podID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerID="26d24ca8d6c2a866bc51af3ac1d29df06ef85fd35aca521cfa360d493e37a644" exitCode=0 Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.266809 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"26d24ca8d6c2a866bc51af3ac1d29df06ef85fd35aca521cfa360d493e37a644"} Jan 26 23:20:40 crc kubenswrapper[4995]: I0126 23:20:40.269734 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.281341 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583" exitCode=0 Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.281411 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.578882 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.750601 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.751065 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.751131 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") pod \"a59475a0-c56e-4d7d-a062-2a9b7188a601\" (UID: \"a59475a0-c56e-4d7d-a062-2a9b7188a601\") " Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.752975 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle" (OuterVolumeSpecName: "bundle") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.759256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8" (OuterVolumeSpecName: "kube-api-access-6wfk8") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "kube-api-access-6wfk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.780522 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util" (OuterVolumeSpecName: "util") pod "a59475a0-c56e-4d7d-a062-2a9b7188a601" (UID: "a59475a0-c56e-4d7d-a062-2a9b7188a601"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853254 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wfk8\" (UniqueName: \"kubernetes.io/projected/a59475a0-c56e-4d7d-a062-2a9b7188a601-kube-api-access-6wfk8\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853305 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:41 crc kubenswrapper[4995]: I0126 23:20:41.853327 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a59475a0-c56e-4d7d-a062-2a9b7188a601-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" event={"ID":"a59475a0-c56e-4d7d-a062-2a9b7188a601","Type":"ContainerDied","Data":"4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6"} Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290852 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d3e8b9e3d5aefbf68934c0abcdc1540b05aa2219fd460a9f65757f103f5b9f6" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.290861 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m" Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.292858 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerStarted","Data":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"} Jan 26 23:20:42 crc kubenswrapper[4995]: I0126 23:20:42.314432 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fr95r" podStartSLOduration=1.833928284 podStartE2EDuration="4.31441901s" podCreationTimestamp="2026-01-26 23:20:38 +0000 UTC" firstStartedPulling="2026-01-26 23:20:39.25787373 +0000 UTC m=+743.422581205" lastFinishedPulling="2026-01-26 23:20:41.738364456 +0000 UTC m=+745.903071931" observedRunningTime="2026-01-26 23:20:42.314159133 +0000 UTC m=+746.478866638" watchObservedRunningTime="2026-01-26 23:20:42.31441901 +0000 UTC m=+746.479126475" Jan 26 23:20:48 crc kubenswrapper[4995]: I0126 23:20:48.458797 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:48 crc kubenswrapper[4995]: I0126 23:20:48.459462 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:49 crc kubenswrapper[4995]: I0126 23:20:49.523737 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fr95r" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" probeResult="failure" output=< Jan 26 23:20:49 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:20:49 crc kubenswrapper[4995]: > Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.200964 4995 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.208988 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209042 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.209077 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="util" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209085 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="util" Jan 26 23:20:51 crc kubenswrapper[4995]: E0126 23:20:51.209124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="pull" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209133 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="pull" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.209411 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59475a0-c56e-4d7d-a062-2a9b7188a601" containerName="extract" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.210203 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.237262 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.237644 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238190 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238364 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.238547 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-8h6dp" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.245041 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382285 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382547 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod 
\"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.382578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.451699 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.452565 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.455058 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.458012 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-tvm9l" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.458362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.469219 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486545 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486603 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486670 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486699 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " 
pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.486766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.492939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-apiservice-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.493430 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-webhook-cert\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.503060 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdxw8\" (UniqueName: \"kubernetes.io/projected/b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9-kube-api-access-kdxw8\") pod \"metallb-operator-controller-manager-9666f9f76-p9s9z\" (UID: \"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9\") " pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.537958 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587735 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587787 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.587813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.591291 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-apiservice-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.592159 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/191e8757-940a-4e3e-a884-f5935f9f8201-webhook-cert\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.611082 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbpkd\" (UniqueName: \"kubernetes.io/projected/191e8757-940a-4e3e-a884-f5935f9f8201-kube-api-access-mbpkd\") pod \"metallb-operator-webhook-server-79fc76bd5c-vctw9\" (UID: \"191e8757-940a-4e3e-a884-f5935f9f8201\") " pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.760582 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z"] Jan 26 23:20:51 crc kubenswrapper[4995]: W0126 23:20:51.770147 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb70f3de5_9e6d_465f_b6c3_b9eb12eba2d9.slice/crio-7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b WatchSource:0}: Error finding container 7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b: Status 404 returned error can't find the container with id 7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b Jan 26 23:20:51 crc kubenswrapper[4995]: I0126 23:20:51.772543 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.027322 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9"] Jan 26 23:20:52 crc kubenswrapper[4995]: W0126 23:20:52.033160 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod191e8757_940a_4e3e_a884_f5935f9f8201.slice/crio-095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5 WatchSource:0}: Error finding container 095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5: Status 404 returned error can't find the container with id 095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5 Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.365215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" event={"ID":"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9","Type":"ContainerStarted","Data":"7de43e024bf737f9db5257681a0a8de501619623d7cb79d7773e0eb13061ac1b"} Jan 26 23:20:52 crc kubenswrapper[4995]: I0126 23:20:52.366410 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" event={"ID":"191e8757-940a-4e3e-a884-f5935f9f8201","Type":"ContainerStarted","Data":"095360b0733341f2812ead593aee66baafee0a0a1292f0d435e49dfaaf23e1e5"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.397591 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" event={"ID":"191e8757-940a-4e3e-a884-f5935f9f8201","Type":"ContainerStarted","Data":"98b36fd5d8e05e25e9891d2baa14df8d47f0c89dea4c1d9da6e14119b1efab91"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.398347 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.398994 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" event={"ID":"b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9","Type":"ContainerStarted","Data":"2afd929c9c4ae68acbafa81ff63b02088309bfe1a47b564f1cde8ada3ed1b29c"} Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.399746 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.426682 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" podStartSLOduration=1.997566972 podStartE2EDuration="6.426664567s" podCreationTimestamp="2026-01-26 23:20:51 +0000 UTC" firstStartedPulling="2026-01-26 23:20:52.036066585 +0000 UTC m=+756.200774050" lastFinishedPulling="2026-01-26 23:20:56.46516418 +0000 UTC m=+760.629871645" observedRunningTime="2026-01-26 23:20:57.423533347 +0000 UTC m=+761.588240812" watchObservedRunningTime="2026-01-26 23:20:57.426664567 +0000 UTC m=+761.591372032" Jan 26 23:20:57 crc kubenswrapper[4995]: I0126 23:20:57.445389 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" podStartSLOduration=1.888058811 podStartE2EDuration="6.44537138s" podCreationTimestamp="2026-01-26 23:20:51 +0000 UTC" firstStartedPulling="2026-01-26 23:20:51.782035588 +0000 UTC m=+755.946743053" lastFinishedPulling="2026-01-26 23:20:56.339348157 +0000 UTC m=+760.504055622" observedRunningTime="2026-01-26 23:20:57.441618685 +0000 UTC m=+761.606326150" watchObservedRunningTime="2026-01-26 23:20:57.44537138 +0000 UTC m=+761.610078845" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.499490 4995 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.539381 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:20:58 crc kubenswrapper[4995]: I0126 23:20:58.748407 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.415845 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fr95r" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" containerID="cri-o://d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" gracePeriod=2 Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.829441 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934631 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.934711 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") 
pod \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\" (UID: \"c64724ab-40c4-4f05-a58b-a8ce4b5ece57\") " Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.935633 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities" (OuterVolumeSpecName: "utilities") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:00 crc kubenswrapper[4995]: I0126 23:21:00.941831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj" (OuterVolumeSpecName: "kube-api-access-gnbcj") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "kube-api-access-gnbcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.036065 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnbcj\" (UniqueName: \"kubernetes.io/projected/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-kube-api-access-gnbcj\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.036118 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.056930 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c64724ab-40c4-4f05-a58b-a8ce4b5ece57" (UID: "c64724ab-40c4-4f05-a58b-a8ce4b5ece57"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.137881 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c64724ab-40c4-4f05-a58b-a8ce4b5ece57-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423364 4995 generic.go:334] "Generic (PLEG): container finished" podID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" exitCode=0 Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423420 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fr95r" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423416 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"} Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fr95r" event={"ID":"c64724ab-40c4-4f05-a58b-a8ce4b5ece57","Type":"ContainerDied","Data":"466e4ff2680bff531e80450abec354d669e22023a880a1983570609fe3fd89c0"} Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.423837 4995 scope.go:117] "RemoveContainer" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.440295 4995 scope.go:117] "RemoveContainer" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.456203 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 
23:21:01.461052 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fr95r"] Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.469302 4995 scope.go:117] "RemoveContainer" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.483967 4995 scope.go:117] "RemoveContainer" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.484596 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": container with ID starting with d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3 not found: ID does not exist" containerID="d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.484665 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3"} err="failed to get container status \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": rpc error: code = NotFound desc = could not find container \"d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3\": container with ID starting with d937f664ef3ffce58580e85ce43ad37a62d4b32593e3aeaa3099c9e1a9af53f3 not found: ID does not exist" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.484711 4995 scope.go:117] "RemoveContainer" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583" Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.485257 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": container with ID 
starting with 9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583 not found: ID does not exist" containerID="9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485299 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583"} err="failed to get container status \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": rpc error: code = NotFound desc = could not find container \"9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583\": container with ID starting with 9f0eac7df50c93f9ca9fdc2c2865cf5a510e6526b80ce69148f702c06a70f583 not found: ID does not exist" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485331 4995 scope.go:117] "RemoveContainer" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5" Jan 26 23:21:01 crc kubenswrapper[4995]: E0126 23:21:01.485832 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": container with ID starting with 7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5 not found: ID does not exist" containerID="7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5" Jan 26 23:21:01 crc kubenswrapper[4995]: I0126 23:21:01.485878 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5"} err="failed to get container status \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": rpc error: code = NotFound desc = could not find container \"7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5\": container with ID starting with 7482e62e2f6cec3d2783d38ea571e7624403acd981803974987875da222d2dd5 not found: 
ID does not exist" Jan 26 23:21:02 crc kubenswrapper[4995]: I0126 23:21:02.527879 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" path="/var/lib/kubelet/pods/c64724ab-40c4-4f05-a58b-a8ce4b5ece57/volumes" Jan 26 23:21:10 crc kubenswrapper[4995]: I0126 23:21:10.893965 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:21:10 crc kubenswrapper[4995]: I0126 23:21:10.894729 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:21:11 crc kubenswrapper[4995]: I0126 23:21:11.778379 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79fc76bd5c-vctw9" Jan 26 23:21:31 crc kubenswrapper[4995]: I0126 23:21:31.541354 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-9666f9f76-p9s9z" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440519 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"] Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440780 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440795 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" 
containerName="registry-server" Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440819 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-utilities" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440827 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-utilities" Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.440835 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-content" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440843 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="extract-content" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.440942 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c64724ab-40c4-4f05-a58b-a8ce4b5ece57" containerName="registry-server" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.441345 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.442955 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-h7fjj" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.443335 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.450993 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-lt5dg"] Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.453959 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.456418 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.456447 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.465340 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"] Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.536610 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-jlkxq"] Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.537647 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539301 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539436 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.539664 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sf9mf" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.542825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-726qp\" (UniqueName: \"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.542864 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.549793 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"] Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.552034 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.555915 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.583506 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"] Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643814 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643853 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643913 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.643991 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-726qp\" (UniqueName: \"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644058 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644153 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644271 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644324 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644344 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644359 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644389 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644427 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644445 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.644514 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.655972 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.660268 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-726qp\" (UniqueName: 
\"kubernetes.io/projected/d71dd2bc-e8c9-4a37-9096-35a1f19333f8-kube-api-access-726qp\") pod \"frr-k8s-webhook-server-7df86c4f6c-bqkf9\" (UID: \"d71dd2bc-e8c9-4a37-9096-35a1f19333f8\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.745985 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746321 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746344 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746374 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746397 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: 
\"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746415 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746440 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746461 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746466 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746483 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746526 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746556 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746571 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.746588 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746616 4995 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 26 23:21:32 crc kubenswrapper[4995]: 
I0126 23:21:32.746639 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-conf\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746666 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs podName:fd8ee636-b6e8-4caf-bf47-8356cf3974a5 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.2466486 +0000 UTC m=+797.411356065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs") pod "controller-6968d8fdc4-hp8cv" (UID: "fd8ee636-b6e8-4caf-bf47-8356cf3974a5") : secret "controller-certs-secret" not found Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746776 4995 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.746820 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist podName:4768de9d-be12-4b0b-9bd1-03f127a1a557 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.246803104 +0000 UTC m=+797.411510569 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist") pod "speaker-jlkxq" (UID: "4768de9d-be12-4b0b-9bd1-03f127a1a557") : secret "metallb-memberlist" not found Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747115 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-reloader\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-sockets\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.747445 4995 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 26 23:21:32 crc kubenswrapper[4995]: E0126 23:21:32.747493 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs podName:11187758-87a2-4879-8421-5d9cdc4fd8bd nodeName:}" failed. No retries permitted until 2026-01-26 23:21:33.247481611 +0000 UTC m=+797.412189086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs") pod "frr-k8s-lt5dg" (UID: "11187758-87a2-4879-8421-5d9cdc4fd8bd") : secret "frr-k8s-certs-secret" not found Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11187758-87a2-4879-8421-5d9cdc4fd8bd-frr-startup\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.747683 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4768de9d-be12-4b0b-9bd1-03f127a1a557-metallb-excludel2\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.749362 4995 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.751758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-metrics-certs\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.760272 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-cert\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.764925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zxtth\" (UniqueName: \"kubernetes.io/projected/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-kube-api-access-zxtth\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.765947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snck7\" (UniqueName: \"kubernetes.io/projected/11187758-87a2-4879-8421-5d9cdc4fd8bd-kube-api-access-snck7\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.768855 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmz9\" (UniqueName: \"kubernetes.io/projected/4768de9d-be12-4b0b-9bd1-03f127a1a557-kube-api-access-pxmz9\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:32 crc kubenswrapper[4995]: I0126 23:21:32.798335 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.191190 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9"] Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.253646 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.254350 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.254425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:33 crc kubenswrapper[4995]: E0126 23:21:33.254692 4995 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 23:21:33 crc kubenswrapper[4995]: E0126 23:21:33.254793 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist podName:4768de9d-be12-4b0b-9bd1-03f127a1a557 nodeName:}" failed. No retries permitted until 2026-01-26 23:21:34.254763492 +0000 UTC m=+798.419470997 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist") pod "speaker-jlkxq" (UID: "4768de9d-be12-4b0b-9bd1-03f127a1a557") : secret "metallb-memberlist" not found Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.259085 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11187758-87a2-4879-8421-5d9cdc4fd8bd-metrics-certs\") pod \"frr-k8s-lt5dg\" (UID: \"11187758-87a2-4879-8421-5d9cdc4fd8bd\") " pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.259992 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fd8ee636-b6e8-4caf-bf47-8356cf3974a5-metrics-certs\") pod \"controller-6968d8fdc4-hp8cv\" (UID: \"fd8ee636-b6e8-4caf-bf47-8356cf3974a5\") " pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.420707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.467339 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.651060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"6d09303f45eea82f4dc7eee094e0db82b1bb5b23a501308eea0d9f41ad68522c"} Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.653885 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" event={"ID":"d71dd2bc-e8c9-4a37-9096-35a1f19333f8","Type":"ContainerStarted","Data":"248cdc3a4e2762b712ea55242aec0c2e031dda42c780e7ec609ace26fed35255"} Jan 26 23:21:33 crc kubenswrapper[4995]: I0126 23:21:33.724250 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-hp8cv"] Jan 26 23:21:33 crc kubenswrapper[4995]: W0126 23:21:33.730018 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd8ee636_b6e8_4caf_bf47_8356cf3974a5.slice/crio-2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8 WatchSource:0}: Error finding container 2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8: Status 404 returned error can't find the container with id 2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8 Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.275129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.280202 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/4768de9d-be12-4b0b-9bd1-03f127a1a557-memberlist\") pod \"speaker-jlkxq\" (UID: \"4768de9d-be12-4b0b-9bd1-03f127a1a557\") " pod="metallb-system/speaker-jlkxq" Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.352758 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:34 crc kubenswrapper[4995]: W0126 23:21:34.368837 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4768de9d_be12_4b0b_9bd1_03f127a1a557.slice/crio-af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7 WatchSource:0}: Error finding container af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7: Status 404 returned error can't find the container with id af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7 Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666371 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"628754c7d459e4437059941126e0dbbba3dd61c0d62972731dd306862564f1fe"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"6f1aef8e072fbdcb86c9f70a6275938598495efca734ea779b5601b59454b35f"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.666441 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-hp8cv" event={"ID":"fd8ee636-b6e8-4caf-bf47-8356cf3974a5","Type":"ContainerStarted","Data":"2d4eeece29017e0b8624d483756d64fb9716c658400d7a062ae83f710c3714b8"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.667241 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.669280 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"ab27940cb234b8781beeeea3062a707b78313c17ebcc25eb27a6166659261441"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.669323 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"af605d46280a8bae8f42396ed1d3365b04e4fc3d36366ed9bde4fe5d607918e7"} Jan 26 23:21:34 crc kubenswrapper[4995]: I0126 23:21:34.698193 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-hp8cv" podStartSLOduration=2.69816624 podStartE2EDuration="2.69816624s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:21:34.688268919 +0000 UTC m=+798.852976394" watchObservedRunningTime="2026-01-26 23:21:34.69816624 +0000 UTC m=+798.862873705" Jan 26 23:21:35 crc kubenswrapper[4995]: I0126 23:21:35.692688 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-jlkxq" event={"ID":"4768de9d-be12-4b0b-9bd1-03f127a1a557","Type":"ContainerStarted","Data":"689c97cf6faa9b2af984dfff69c4b1359663537d4113d60e2ce71bc9ad2e5e70"} Jan 26 23:21:35 crc kubenswrapper[4995]: I0126 23:21:35.716878 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-jlkxq" podStartSLOduration=3.716857918 podStartE2EDuration="3.716857918s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:21:35.712765315 +0000 UTC 
m=+799.877472790" watchObservedRunningTime="2026-01-26 23:21:35.716857918 +0000 UTC m=+799.881565373" Jan 26 23:21:36 crc kubenswrapper[4995]: I0126 23:21:36.706766 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.736289 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="feab391947619bd0d9a3e71925298a7c291add25c95e7e12d054b0599bfa6837" exitCode=0 Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.736372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"feab391947619bd0d9a3e71925298a7c291add25c95e7e12d054b0599bfa6837"} Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.740157 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" event={"ID":"d71dd2bc-e8c9-4a37-9096-35a1f19333f8","Type":"ContainerStarted","Data":"2f89c00262b70796eb9cb03c5a330ba1e94e2c61961cb995b32e97c6db6b1925"} Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.740300 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.785368 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" podStartSLOduration=1.548354907 podStartE2EDuration="8.785347066s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="2026-01-26 23:21:33.205642358 +0000 UTC m=+797.370349823" lastFinishedPulling="2026-01-26 23:21:40.442634517 +0000 UTC m=+804.607341982" observedRunningTime="2026-01-26 23:21:40.783999942 +0000 UTC m=+804.948707467" watchObservedRunningTime="2026-01-26 23:21:40.785347066 +0000 UTC m=+804.950054551" Jan 26 
23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.893660 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:21:40 crc kubenswrapper[4995]: I0126 23:21:40.893735 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:21:41 crc kubenswrapper[4995]: I0126 23:21:41.751054 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="e2de9b23e67dc3791e4411a0d1d17c652ccf78e323a6381f1fe611e6be1880d9" exitCode=0 Jan 26 23:21:41 crc kubenswrapper[4995]: I0126 23:21:41.751198 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"e2de9b23e67dc3791e4411a0d1d17c652ccf78e323a6381f1fe611e6be1880d9"} Jan 26 23:21:42 crc kubenswrapper[4995]: I0126 23:21:42.764385 4995 generic.go:334] "Generic (PLEG): container finished" podID="11187758-87a2-4879-8421-5d9cdc4fd8bd" containerID="086bc9d9d39cf4928d6979ce48066ffd786a42b1fde5d217a55a3708fcefb6ce" exitCode=0 Jan 26 23:21:42 crc kubenswrapper[4995]: I0126 23:21:42.764454 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerDied","Data":"086bc9d9d39cf4928d6979ce48066ffd786a42b1fde5d217a55a3708fcefb6ce"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.471208 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-6968d8fdc4-hp8cv" Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776592 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"4326cffa9ffdb13b8a22e91a81bab58e12df6df32e6be86bb959e23bdc5daf5a"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776637 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"cf41880edaa3f06d5b0f184600bd762db9b7dc85c86c6fc6ab701ba773608423"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776650 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"5ac09632dabf3319e20fd304617b7a931601e7696cbe87ebf95e49e185b1cf7c"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776662 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"855c410e2e10ec3f2d3970b09fce8fdbce9eef3c80a6a03d86784889697b689f"} Jan 26 23:21:43 crc kubenswrapper[4995]: I0126 23:21:43.776673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"5637b1909725ba03f4d3d3420c6a75ad43bcb6a19fe53ddb4e6ff616c2a287a9"} Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.357993 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-jlkxq" Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.787507 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-lt5dg" 
event={"ID":"11187758-87a2-4879-8421-5d9cdc4fd8bd","Type":"ContainerStarted","Data":"8cf5ac4b80885eea33a6e9bb209ce8c1443c374f1acf06c6ba0320c6203072d5"} Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.788221 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:44 crc kubenswrapper[4995]: I0126 23:21:44.811727 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-lt5dg" podStartSLOduration=5.997229614 podStartE2EDuration="12.811710472s" podCreationTimestamp="2026-01-26 23:21:32 +0000 UTC" firstStartedPulling="2026-01-26 23:21:33.599429281 +0000 UTC m=+797.764136786" lastFinishedPulling="2026-01-26 23:21:40.413910179 +0000 UTC m=+804.578617644" observedRunningTime="2026-01-26 23:21:44.811475987 +0000 UTC m=+808.976183472" watchObservedRunningTime="2026-01-26 23:21:44.811710472 +0000 UTC m=+808.976417937" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.867399 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.868807 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.877529 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 23:21:45 crc kubenswrapper[4995]: I0126 23:21:45.877600 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.065243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: 
I0126 23:21:46.166377 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.166996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.167143 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.188563 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.272381 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.514434 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8"] Jan 26 23:21:46 crc kubenswrapper[4995]: W0126 23:21:46.515061 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2fc70c8_babd_496e_8d1c_acd82bb98901.slice/crio-1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39 WatchSource:0}: Error finding container 1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39: Status 404 returned error can't find the container with id 1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39 Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801130 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="b4ec2a215429da042d9b72336fc0b2946bffca22fe46351c2e9d2bd1313d641e" exitCode=0 
Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801199 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"b4ec2a215429da042d9b72336fc0b2946bffca22fe46351c2e9d2bd1313d641e"} Jan 26 23:21:46 crc kubenswrapper[4995]: I0126 23:21:46.801229 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerStarted","Data":"1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39"} Jan 26 23:21:48 crc kubenswrapper[4995]: I0126 23:21:48.421367 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:48 crc kubenswrapper[4995]: I0126 23:21:48.530468 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:50 crc kubenswrapper[4995]: I0126 23:21:50.971694 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="d5342cfdcf66dca9b448cad05d76b2baca8cafb86cfa4cdba377ec0f2d5d6127" exitCode=0 Jan 26 23:21:50 crc kubenswrapper[4995]: I0126 23:21:50.971828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"d5342cfdcf66dca9b448cad05d76b2baca8cafb86cfa4cdba377ec0f2d5d6127"} Jan 26 23:21:51 crc kubenswrapper[4995]: I0126 23:21:51.985219 4995 generic.go:334] "Generic (PLEG): container finished" podID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerID="028ea523146713c5f53bec59ed2513db8b08261ea099e88966ffd7dee26b9fc6" exitCode=0 Jan 26 23:21:51 crc kubenswrapper[4995]: I0126 
23:21:51.985300 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"028ea523146713c5f53bec59ed2513db8b08261ea099e88966ffd7dee26b9fc6"} Jan 26 23:21:52 crc kubenswrapper[4995]: I0126 23:21:52.803492 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bqkf9" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.310163 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.424134 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-lt5dg" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499843 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.499960 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") pod \"a2fc70c8-babd-496e-8d1c-acd82bb98901\" (UID: \"a2fc70c8-babd-496e-8d1c-acd82bb98901\") " Jan 26 23:21:53 crc kubenswrapper[4995]: 
I0126 23:21:53.501490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle" (OuterVolumeSpecName: "bundle") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.506387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr" (OuterVolumeSpecName: "kube-api-access-ts2kr") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "kube-api-access-ts2kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.516848 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util" (OuterVolumeSpecName: "util") pod "a2fc70c8-babd-496e-8d1c-acd82bb98901" (UID: "a2fc70c8-babd-496e-8d1c-acd82bb98901"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601378 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601412 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2fc70c8-babd-496e-8d1c-acd82bb98901-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:53 crc kubenswrapper[4995]: I0126 23:21:53.601422 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts2kr\" (UniqueName: \"kubernetes.io/projected/a2fc70c8-babd-496e-8d1c-acd82bb98901-kube-api-access-ts2kr\") on node \"crc\" DevicePath \"\"" Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000823 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" event={"ID":"a2fc70c8-babd-496e-8d1c-acd82bb98901","Type":"ContainerDied","Data":"1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39"} Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000899 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8" Jan 26 23:21:54 crc kubenswrapper[4995]: I0126 23:21:54.000911 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8d9645957e68a7aac9bf15e87c972880983c35e81d57598a6dcf2e7c1f8c39" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.018405 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019302 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019323 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019350 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="pull" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="pull" Jan 26 23:22:00 crc kubenswrapper[4995]: E0126 23:22:00.019388 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="util" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019401 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="util" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.019612 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fc70c8-babd-496e-8d1c-acd82bb98901" containerName="extract" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.020335 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.023616 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.024584 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-srptp" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.024797 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.042430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.189317 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.189396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.290855 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.291403 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.291582 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e647402-f342-4296-a09b-512075e3d867-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.327486 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25djz\" (UniqueName: \"kubernetes.io/projected/6e647402-f342-4296-a09b-512075e3d867-kube-api-access-25djz\") pod \"cert-manager-operator-controller-manager-64cf6dff88-cdrg5\" (UID: \"6e647402-f342-4296-a09b-512075e3d867\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.365132 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" Jan 26 23:22:00 crc kubenswrapper[4995]: I0126 23:22:00.843523 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5"] Jan 26 23:22:01 crc kubenswrapper[4995]: I0126 23:22:01.078190 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" event={"ID":"6e647402-f342-4296-a09b-512075e3d867","Type":"ContainerStarted","Data":"ee9c0f5f7e15a6a13254242853fb979fca91eb29f8a979aec205009931daeac9"} Jan 26 23:22:09 crc kubenswrapper[4995]: I0126 23:22:09.137351 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" event={"ID":"6e647402-f342-4296-a09b-512075e3d867","Type":"ContainerStarted","Data":"f608b058288c4c97ca1f6c54a5135829294ba12fab1de8d9acfeaabbfe482882"} Jan 26 23:22:09 crc kubenswrapper[4995]: I0126 23:22:09.168429 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-cdrg5" podStartSLOduration=2.9798194430000002 podStartE2EDuration="10.168404087s" podCreationTimestamp="2026-01-26 23:21:59 +0000 UTC" firstStartedPulling="2026-01-26 23:22:00.848777106 +0000 UTC m=+825.013484581" lastFinishedPulling="2026-01-26 23:22:08.03736176 +0000 UTC m=+832.202069225" observedRunningTime="2026-01-26 23:22:09.164323853 +0000 UTC m=+833.329031358" watchObservedRunningTime="2026-01-26 23:22:09.168404087 +0000 UTC m=+833.333111592" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893340 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893412 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.893461 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.894046 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:22:10 crc kubenswrapper[4995]: I0126 23:22:10.894125 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" gracePeriod=600 Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159190 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" exitCode=0 Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46"} Jan 26 23:22:11 crc kubenswrapper[4995]: I0126 23:22:11.159735 4995 scope.go:117] "RemoveContainer" containerID="e7586fc74dcbd4a07d6a21db761bf1e0053c5b99f541975fd3e7df1c8ddea8ab" Jan 26 23:22:12 crc kubenswrapper[4995]: I0126 23:22:12.172579 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.071199 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.072242 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074281 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074456 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2p4ql" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.074857 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.083244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.170063 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4v2p\" (UniqueName: 
\"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.170298 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.271705 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.272494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4v2p\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.301747 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.305288 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n4v2p\" (UniqueName: \"kubernetes.io/projected/5cf25cae-f1af-44e4-a613-be45044cf998-kube-api-access-n4v2p\") pod \"cert-manager-webhook-f4fb5df64-g88s9\" (UID: \"5cf25cae-f1af-44e4-a613-be45044cf998\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.387157 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:13 crc kubenswrapper[4995]: I0126 23:22:13.874910 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-g88s9"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.099082 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.100086 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.102794 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2snmb" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.132441 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.183746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" event={"ID":"5cf25cae-f1af-44e4-a613-be45044cf998","Type":"ContainerStarted","Data":"5afc86536bd5c9cafd19909a39f93dccfd2bcef22237386b7d1c92dc5fe258ec"} Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.284336 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-869rp\" (UniqueName: 
\"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.284386 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.385371 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-869rp\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.385414 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.407577 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.415717 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-869rp\" (UniqueName: \"kubernetes.io/projected/10b23efd-9250-469e-8bce-4f31c05d1470-kube-api-access-869rp\") pod \"cert-manager-cainjector-855d9ccff4-hjbt4\" (UID: \"10b23efd-9250-469e-8bce-4f31c05d1470\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.418515 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" Jan 26 23:22:14 crc kubenswrapper[4995]: I0126 23:22:14.700190 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4"] Jan 26 23:22:14 crc kubenswrapper[4995]: W0126 23:22:14.708875 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10b23efd_9250_469e_8bce_4f31c05d1470.slice/crio-2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d WatchSource:0}: Error finding container 2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d: Status 404 returned error can't find the container with id 2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d Jan 26 23:22:15 crc kubenswrapper[4995]: I0126 23:22:15.196149 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" event={"ID":"10b23efd-9250-469e-8bce-4f31c05d1470","Type":"ContainerStarted","Data":"2e77b2a571ff46fb79144257c16e1b4dcc2452b588bb7f1e7e061812c233798d"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.239593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" event={"ID":"5cf25cae-f1af-44e4-a613-be45044cf998","Type":"ContainerStarted","Data":"24868142b6a95d8a94769cea5efaa3c1b006a9945f7752c5d692ca07ed3bb462"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.241200 
4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.242488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" event={"ID":"10b23efd-9250-469e-8bce-4f31c05d1470","Type":"ContainerStarted","Data":"60894095c7fddc6411cc043ab51dc546613a7013bb3c3706d72cb841f6af4957"} Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.257183 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" podStartSLOduration=1.334659658 podStartE2EDuration="9.257169909s" podCreationTimestamp="2026-01-26 23:22:13 +0000 UTC" firstStartedPulling="2026-01-26 23:22:13.881992189 +0000 UTC m=+838.046699664" lastFinishedPulling="2026-01-26 23:22:21.80450244 +0000 UTC m=+845.969209915" observedRunningTime="2026-01-26 23:22:22.255243771 +0000 UTC m=+846.419951236" watchObservedRunningTime="2026-01-26 23:22:22.257169909 +0000 UTC m=+846.421877374" Jan 26 23:22:22 crc kubenswrapper[4995]: I0126 23:22:22.275012 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-hjbt4" podStartSLOduration=1.180109136 podStartE2EDuration="8.274991413s" podCreationTimestamp="2026-01-26 23:22:14 +0000 UTC" firstStartedPulling="2026-01-26 23:22:14.710008233 +0000 UTC m=+838.874715698" lastFinishedPulling="2026-01-26 23:22:21.80489051 +0000 UTC m=+845.969597975" observedRunningTime="2026-01-26 23:22:22.269601059 +0000 UTC m=+846.434308514" watchObservedRunningTime="2026-01-26 23:22:22.274991413 +0000 UTC m=+846.439698878" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.190778 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.191527 4995 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.202230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.205879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-ssvqt" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.313528 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.313583 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.415179 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.415255 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: 
\"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.454630 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-bound-sa-token\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.454720 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmrp\" (UniqueName: \"kubernetes.io/projected/0ea05f4b-1373-4e08-9d78-e214b84cdc79-kube-api-access-9kmrp\") pod \"cert-manager-86cb77c54b-4g78v\" (UID: \"0ea05f4b-1373-4e08-9d78-e214b84cdc79\") " pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.505346 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-4g78v" Jan 26 23:22:23 crc kubenswrapper[4995]: I0126 23:22:23.758155 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-4g78v"] Jan 26 23:22:23 crc kubenswrapper[4995]: W0126 23:22:23.765036 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ea05f4b_1373_4e08_9d78_e214b84cdc79.slice/crio-ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9 WatchSource:0}: Error finding container ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9: Status 404 returned error can't find the container with id ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9 Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.270325 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4g78v" 
event={"ID":"0ea05f4b-1373-4e08-9d78-e214b84cdc79","Type":"ContainerStarted","Data":"9bda28cc3e670ebe44d2aed7bf634fc7a1d21e4239dacc53931b047119321132"} Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.270396 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-4g78v" event={"ID":"0ea05f4b-1373-4e08-9d78-e214b84cdc79","Type":"ContainerStarted","Data":"ea590547d8f06db32c041350ba04b757de8614e46fc2442ce4e72e1d3ef9f3b9"} Jan 26 23:22:24 crc kubenswrapper[4995]: I0126 23:22:24.292854 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-4g78v" podStartSLOduration=1.292833219 podStartE2EDuration="1.292833219s" podCreationTimestamp="2026-01-26 23:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:22:24.28287115 +0000 UTC m=+848.447578615" watchObservedRunningTime="2026-01-26 23:22:24.292833219 +0000 UTC m=+848.457540684" Jan 26 23:22:28 crc kubenswrapper[4995]: I0126 23:22:28.391384 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-g88s9" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.690103 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.692304 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698388 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698507 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-j75mr" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.698762 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.701706 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.831090 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.932520 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:31 crc kubenswrapper[4995]: I0126 23:22:31.952634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"openstack-operator-index-fzjhg\" (UID: 
\"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") " pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:32 crc kubenswrapper[4995]: I0126 23:22:32.067726 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg" Jan 26 23:22:32 crc kubenswrapper[4995]: I0126 23:22:32.463456 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:33 crc kubenswrapper[4995]: I0126 23:22:33.342128 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerStarted","Data":"aeca02f391a8dd03ed2ab5137c9d680b8ac5c3e8670fba3b9525f1ab0df35c29"} Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.055776 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"] Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.354974 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerStarted","Data":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"} Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.355112 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-fzjhg" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server" containerID="cri-o://39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" gracePeriod=2 Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.369683 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-fzjhg" podStartSLOduration=1.963383484 podStartE2EDuration="4.369664461s" podCreationTimestamp="2026-01-26 23:22:31 +0000 UTC" 
firstStartedPulling="2026-01-26 23:22:32.477442359 +0000 UTC m=+856.642149824" lastFinishedPulling="2026-01-26 23:22:34.883723326 +0000 UTC m=+859.048430801" observedRunningTime="2026-01-26 23:22:35.369199099 +0000 UTC m=+859.533906564" watchObservedRunningTime="2026-01-26 23:22:35.369664461 +0000 UTC m=+859.534371926"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.668413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"]
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.669428 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.684324 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"]
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.757225 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.788726 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.890213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") pod \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\" (UID: \"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1\") "
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.890467 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.899965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd" (OuterVolumeSpecName: "kube-api-access-brjwd") pod "66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" (UID: "66b955f3-66bb-42fe-b7de-e5a07bbc4bd1"). InnerVolumeSpecName "kube-api-access-brjwd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.915424 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkx5q\" (UniqueName: \"kubernetes.io/projected/ca183057-4337-4dfb-a5ec-e8945fe74cca-kube-api-access-wkx5q\") pod \"openstack-operator-index-z9fdb\" (UID: \"ca183057-4337-4dfb-a5ec-e8945fe74cca\") " pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.992028 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brjwd\" (UniqueName: \"kubernetes.io/projected/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1-kube-api-access-brjwd\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:35 crc kubenswrapper[4995]: I0126 23:22:35.992244 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.256559 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-z9fdb"]
Jan 26 23:22:36 crc kubenswrapper[4995]: W0126 23:22:36.268779 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca183057_4337_4dfb_a5ec_e8945fe74cca.slice/crio-c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247 WatchSource:0}: Error finding container c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247: Status 404 returned error can't find the container with id c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.390288 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z9fdb" event={"ID":"ca183057-4337-4dfb-a5ec-e8945fe74cca","Type":"ContainerStarted","Data":"c85224bebdeebdf5990b8931fe233be1d8d5cd1955ee5290554d3f6882c27247"}
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395589 4995 generic.go:334] "Generic (PLEG): container finished" podID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be" exitCode=0
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerDied","Data":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"}
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-fzjhg" event={"ID":"66b955f3-66bb-42fe-b7de-e5a07bbc4bd1","Type":"ContainerDied","Data":"aeca02f391a8dd03ed2ab5137c9d680b8ac5c3e8670fba3b9525f1ab0df35c29"}
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395690 4995 scope.go:117] "RemoveContainer" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.395808 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-fzjhg"
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.457000 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"]
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.463682 4995 scope.go:117] "RemoveContainer" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"
Jan 26 23:22:36 crc kubenswrapper[4995]: E0126 23:22:36.464315 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": container with ID starting with 39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be not found: ID does not exist" containerID="39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.464343 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be"} err="failed to get container status \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": rpc error: code = NotFound desc = could not find container \"39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be\": container with ID starting with 39f49022c443cbaee598d9a5a1d1adccef0e7e2831059d9bcfdb8fac87b913be not found: ID does not exist"
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.468076 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-fzjhg"]
Jan 26 23:22:36 crc kubenswrapper[4995]: I0126 23:22:36.525960 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" path="/var/lib/kubelet/pods/66b955f3-66bb-42fe-b7de-e5a07bbc4bd1/volumes"
Jan 26 23:22:37 crc kubenswrapper[4995]: I0126 23:22:37.404582 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-z9fdb" event={"ID":"ca183057-4337-4dfb-a5ec-e8945fe74cca","Type":"ContainerStarted","Data":"a5a2cd6aba9978870b845040b93b52033cbb36f75ecea9df7f7bb74684c28918"}
Jan 26 23:22:37 crc kubenswrapper[4995]: I0126 23:22:37.425442 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-z9fdb" podStartSLOduration=2.372414801 podStartE2EDuration="2.425424063s" podCreationTimestamp="2026-01-26 23:22:35 +0000 UTC" firstStartedPulling="2026-01-26 23:22:36.274810235 +0000 UTC m=+860.439517750" lastFinishedPulling="2026-01-26 23:22:36.327819537 +0000 UTC m=+860.492527012" observedRunningTime="2026-01-26 23:22:37.420616123 +0000 UTC m=+861.585323598" watchObservedRunningTime="2026-01-26 23:22:37.425424063 +0000 UTC m=+861.590131528"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.277597 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:43 crc kubenswrapper[4995]: E0126 23:22:43.278582 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.278617 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.278926 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b955f3-66bb-42fe-b7de-e5a07bbc4bd1" containerName="registry-server"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.281053 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.288076 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399823 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399896 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.399930 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501581 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501646 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.501685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.502051 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.502164 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.521364 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"community-operators-lhq9l\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") " pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.626873 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:43 crc kubenswrapper[4995]: I0126 23:22:43.910623 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.458203 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416" exitCode=0
Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.458340 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"}
Jan 26 23:22:44 crc kubenswrapper[4995]: I0126 23:22:44.459371 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"33dffb9a73e1025af441a109e2fb4fb29fb662cdba9a46e660ae5dca52813904"}
Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.466539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"}
Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.993008 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:45 crc kubenswrapper[4995]: I0126 23:22:45.993451 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.033025 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.480344 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869" exitCode=0
Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.481289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"}
Jan 26 23:22:46 crc kubenswrapper[4995]: I0126 23:22:46.534033 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-z9fdb"
Jan 26 23:22:47 crc kubenswrapper[4995]: I0126 23:22:47.490561 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerStarted","Data":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"}
Jan 26 23:22:47 crc kubenswrapper[4995]: I0126 23:22:47.515873 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lhq9l" podStartSLOduration=2.085659349 podStartE2EDuration="4.515850853s" podCreationTimestamp="2026-01-26 23:22:43 +0000 UTC" firstStartedPulling="2026-01-26 23:22:44.460318487 +0000 UTC m=+868.625025952" lastFinishedPulling="2026-01-26 23:22:46.890509951 +0000 UTC m=+871.055217456" observedRunningTime="2026-01-26 23:22:47.510888249 +0000 UTC m=+871.675595744" watchObservedRunningTime="2026-01-26 23:22:47.515850853 +0000 UTC m=+871.680558338"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.137130 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"]
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.142020 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.145571 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jtm6l"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.151399 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"]
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.249520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.250060 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.250172 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352272 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.352391 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.353003 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.353244 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.383267 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") " pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.483981 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.627495 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.629329 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.702521 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:53 crc kubenswrapper[4995]: I0126 23:22:53.740911 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"]
Jan 26 23:22:53 crc kubenswrapper[4995]: W0126 23:22:53.753958 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cbffe6c_1d98_4769_8f02_7a966a63ef38.slice/crio-ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab WatchSource:0}: Error finding container ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab: Status 404 returned error can't find the container with id ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab
Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.557259 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="0f8c0e69350ba0b54b859b17faf545edf4216ff04a06e1db2646a7a546558883" exitCode=0
Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.559215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"0f8c0e69350ba0b54b859b17faf545edf4216ff04a06e1db2646a7a546558883"}
Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.559262 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerStarted","Data":"ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab"}
Jan 26 23:22:54 crc kubenswrapper[4995]: I0126 23:22:54.614496 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:55 crc kubenswrapper[4995]: I0126 23:22:55.568128 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="775ecca36989c1c8a8c882b9b28e779fd41acf55a8dd843512e2e45dfe1810d8" exitCode=0
Jan 26 23:22:55 crc kubenswrapper[4995]: I0126 23:22:55.568215 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"775ecca36989c1c8a8c882b9b28e779fd41acf55a8dd843512e2e45dfe1810d8"}
Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.581300 4995 generic.go:334] "Generic (PLEG): container finished" podID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerID="132c6002305c662a0517ffe743dbadb9323be0431c2c4b18edf8f66b79d2edef" exitCode=0
Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.581464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"132c6002305c662a0517ffe743dbadb9323be0431c2c4b18edf8f66b79d2edef"}
Jan 26 23:22:56 crc kubenswrapper[4995]: I0126 23:22:56.660131 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.586997 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lhq9l" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" containerID="cri-o://3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" gracePeriod=2
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.863681 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.941192 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") "
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle" (OuterVolumeSpecName: "bundle") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942236 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") "
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.942333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") pod \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\" (UID: \"1cbffe6c-1d98-4769-8f02-7a966a63ef38\") "
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.951077 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.951574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt" (OuterVolumeSpecName: "kube-api-access-gsqmt") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "kube-api-access-gsqmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.959471 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util" (OuterVolumeSpecName: "util") pod "1cbffe6c-1d98-4769-8f02-7a966a63ef38" (UID: "1cbffe6c-1d98-4769-8f02-7a966a63ef38"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:22:57 crc kubenswrapper[4995]: I0126 23:22:57.995397 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052308 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") "
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052687 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") "
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.052983 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") pod \"3d2e5512-c49b-4082-a5b0-44df42a443ee\" (UID: \"3d2e5512-c49b-4082-a5b0-44df42a443ee\") "
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.053527 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cbffe6c-1d98-4769-8f02-7a966a63ef38-util\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.054093 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsqmt\" (UniqueName: \"kubernetes.io/projected/1cbffe6c-1d98-4769-8f02-7a966a63ef38-kube-api-access-gsqmt\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.058358 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct" (OuterVolumeSpecName: "kube-api-access-2l9ct") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "kube-api-access-2l9ct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.059054 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities" (OuterVolumeSpecName: "utilities") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.125843 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d2e5512-c49b-4082-a5b0-44df42a443ee" (UID: "3d2e5512-c49b-4082-a5b0-44df42a443ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.158968 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.159005 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d2e5512-c49b-4082-a5b0-44df42a443ee-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.159015 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9ct\" (UniqueName: \"kubernetes.io/projected/3d2e5512-c49b-4082-a5b0-44df42a443ee-kube-api-access-2l9ct\") on node \"crc\" DevicePath \"\""
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597587 4995 generic.go:334] "Generic (PLEG): container finished" podID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1" exitCode=0
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597670 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"}
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597709 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lhq9l" event={"ID":"3d2e5512-c49b-4082-a5b0-44df42a443ee","Type":"ContainerDied","Data":"33dffb9a73e1025af441a109e2fb4fb29fb662cdba9a46e660ae5dca52813904"}
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597732 4995 scope.go:117] "RemoveContainer" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.597742 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lhq9l"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604421 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc" event={"ID":"1cbffe6c-1d98-4769-8f02-7a966a63ef38","Type":"ContainerDied","Data":"ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab"}
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604476 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5b06b7ac5626f340c0034b3ae34ed6b029b48200466d5c2b5046af454254ab"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.604566 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.631352 4995 scope.go:117] "RemoveContainer" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.631475 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.634897 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lhq9l"]
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.652520 4995 scope.go:117] "RemoveContainer" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.677493 4995 scope.go:117] "RemoveContainer" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"
Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.678133 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": container with ID starting with 3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1 not found: ID does not exist" containerID="3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.678346 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1"} err="failed to get container status \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": rpc error: code = NotFound desc = could not find container \"3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1\": container with ID starting with 3d046f2378e14e6f49e23855f69ef888d24fdb55379c939c748196d40b3205b1 not found: ID does not exist"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.678497 4995 scope.go:117] "RemoveContainer" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"
Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.679151 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": container with ID starting with 0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869 not found: ID does not exist" containerID="0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679238 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869"} err="failed to get container status \"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": rpc error: code = NotFound desc = could not find container \"0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869\": container with ID starting with 0074c64a10c1d994cfa5eb0b513b1d05db979b1aead792364c748a1587b7f869 not found: ID does not exist"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679305 4995 scope.go:117] "RemoveContainer" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"
Jan 26 23:22:58 crc kubenswrapper[4995]: E0126 23:22:58.679584 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": container with ID starting with 16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416 not found: ID does not exist" containerID="16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"
Jan 26 23:22:58 crc kubenswrapper[4995]: I0126 23:22:58.679732 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416"} err="failed to get container status \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": rpc error: code = NotFound desc = could not find container \"16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416\": container with ID starting with 16c598ac1f3945c0dd9c51e5ab41bf198a7df36329bd8ea8634e959ebf5e8416 not found: ID does not exist"
Jan 26 23:23:00 crc kubenswrapper[4995]: I0126 23:23:00.533026 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" path="/var/lib/kubelet/pods/3d2e5512-c49b-4082-a5b0-44df42a443ee/volumes"
Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079389 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"]
Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079877 4995 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="pull" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079888 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="pull" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079899 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="util" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079905 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="util" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079915 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-content" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079921 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-content" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079934 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-utilities" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079940 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="extract-utilities" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079949 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079954 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: E0126 23:23:05.079965 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" 
containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.079971 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080070 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cbffe6c-1d98-4769-8f02-7a966a63ef38" containerName="extract" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080081 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d2e5512-c49b-4082-a5b0-44df42a443ee" containerName="registry-server" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.080504 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.082637 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-96btn" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.111885 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.155331 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.257064 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod 
\"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.279357 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"openstack-operator-controller-init-f8d7d87cb-d4ktp\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.403713 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:05 crc kubenswrapper[4995]: I0126 23:23:05.837531 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:23:05 crc kubenswrapper[4995]: W0126 23:23:05.851888 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod892f33f6_3409_407d_b85b_922b8bdbfa16.slice/crio-c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0 WatchSource:0}: Error finding container c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0: Status 404 returned error can't find the container with id c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0 Jan 26 23:23:06 crc kubenswrapper[4995]: I0126 23:23:06.665574 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerStarted","Data":"c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0"} Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.694193 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerStarted","Data":"6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114"} Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.695047 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:10 crc kubenswrapper[4995]: I0126 23:23:10.744946 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" podStartSLOduration=1.76987821 podStartE2EDuration="5.744918799s" podCreationTimestamp="2026-01-26 23:23:05 +0000 UTC" firstStartedPulling="2026-01-26 23:23:05.854974823 +0000 UTC m=+890.019682328" lastFinishedPulling="2026-01-26 23:23:09.830015442 +0000 UTC m=+893.994722917" observedRunningTime="2026-01-26 23:23:10.733414792 +0000 UTC m=+894.898122297" watchObservedRunningTime="2026-01-26 23:23:10.744918799 +0000 UTC m=+894.909626294" Jan 26 23:23:15 crc kubenswrapper[4995]: I0126 23:23:15.408146 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.866842 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.868387 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:16 crc kubenswrapper[4995]: I0126 23:23:16.886139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034041 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034135 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.034372 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.135948 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136022 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136482 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.136523 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.155508 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"redhat-marketplace-qpbtn\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.184070 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.403500 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759445 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" exitCode=0 Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a"} Jan 26 23:23:17 crc kubenswrapper[4995]: I0126 23:23:17.759733 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"68f5c163dd5a5f769fd0700b2fbf0f86c445afc1ada1ec7b333f08125ea3f657"} Jan 26 23:23:18 crc kubenswrapper[4995]: I0126 23:23:18.780476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} Jan 26 23:23:19 crc kubenswrapper[4995]: I0126 23:23:19.788823 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" exitCode=0 Jan 26 23:23:19 crc kubenswrapper[4995]: I0126 23:23:19.788907 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" 
event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} Jan 26 23:23:21 crc kubenswrapper[4995]: I0126 23:23:21.805234 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerStarted","Data":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} Jan 26 23:23:21 crc kubenswrapper[4995]: I0126 23:23:21.824763 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qpbtn" podStartSLOduration=2.924478223 podStartE2EDuration="5.824748646s" podCreationTimestamp="2026-01-26 23:23:16 +0000 UTC" firstStartedPulling="2026-01-26 23:23:17.761340982 +0000 UTC m=+901.926048457" lastFinishedPulling="2026-01-26 23:23:20.661611375 +0000 UTC m=+904.826318880" observedRunningTime="2026-01-26 23:23:21.820970452 +0000 UTC m=+905.985677917" watchObservedRunningTime="2026-01-26 23:23:21.824748646 +0000 UTC m=+905.989456111" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.190117 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.191561 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.227670 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.315991 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417505 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417592 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.417634 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.418202 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.418254 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.459043 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"certified-operators-7j9zc\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.550241 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:23 crc kubenswrapper[4995]: I0126 23:23:23.941847 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.830854 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" exitCode=0 Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.830944 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b"} Jan 26 23:23:24 crc kubenswrapper[4995]: I0126 23:23:24.831197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"04da271e4f7505c2ffd196e4561fda52d5add96fbb0f643f634bb2bd36cc7757"} Jan 26 23:23:25 crc kubenswrapper[4995]: I0126 23:23:25.844529 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} Jan 26 23:23:26 crc kubenswrapper[4995]: I0126 23:23:26.852960 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" exitCode=0 Jan 26 23:23:26 crc kubenswrapper[4995]: I0126 23:23:26.852995 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" 
event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.191340 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.191491 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.249228 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.860806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerStarted","Data":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.902934 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7j9zc" podStartSLOduration=2.479602888 podStartE2EDuration="4.902915048s" podCreationTimestamp="2026-01-26 23:23:23 +0000 UTC" firstStartedPulling="2026-01-26 23:23:24.832627284 +0000 UTC m=+908.997334759" lastFinishedPulling="2026-01-26 23:23:27.255939454 +0000 UTC m=+911.420646919" observedRunningTime="2026-01-26 23:23:27.898151999 +0000 UTC m=+912.062859464" watchObservedRunningTime="2026-01-26 23:23:27.902915048 +0000 UTC m=+912.067622513" Jan 26 23:23:27 crc kubenswrapper[4995]: I0126 23:23:27.963148 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:29 crc kubenswrapper[4995]: I0126 23:23:29.580431 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:30 crc kubenswrapper[4995]: I0126 23:23:30.883303 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qpbtn" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server" containerID="cri-o://76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" gracePeriod=2 Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.795528 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892241 4995 generic.go:334] "Generic (PLEG): container finished" podID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" exitCode=0 Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892315 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892344 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qpbtn" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892404 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qpbtn" event={"ID":"44311f3a-63ea-444c-bda7-470d8c27fbcb","Type":"ContainerDied","Data":"68f5c163dd5a5f769fd0700b2fbf0f86c445afc1ada1ec7b333f08125ea3f657"} Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.892436 4995 scope.go:117] "RemoveContainer" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.913530 4995 scope.go:117] "RemoveContainer" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.935228 4995 scope.go:117] "RemoveContainer" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938719 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: \"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.938825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") pod \"44311f3a-63ea-444c-bda7-470d8c27fbcb\" (UID: 
\"44311f3a-63ea-444c-bda7-470d8c27fbcb\") " Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.941065 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities" (OuterVolumeSpecName: "utilities") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.959293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7" (OuterVolumeSpecName: "kube-api-access-pknq7") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "kube-api-access-pknq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.990276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44311f3a-63ea-444c-bda7-470d8c27fbcb" (UID: "44311f3a-63ea-444c-bda7-470d8c27fbcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.996915 4995 scope.go:117] "RemoveContainer" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.998940 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": container with ID starting with 76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba not found: ID does not exist" containerID="76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.998982 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba"} err="failed to get container status \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": rpc error: code = NotFound desc = could not find container \"76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba\": container with ID starting with 76f334d8cbea83a61bac930a31211de45e16d406e9493bb463e29c6b6ed31aba not found: ID does not exist" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999012 4995 scope.go:117] "RemoveContainer" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.999321 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": container with ID starting with 439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d not found: ID does not exist" containerID="439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999373 
4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d"} err="failed to get container status \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": rpc error: code = NotFound desc = could not find container \"439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d\": container with ID starting with 439194bc5bdc64c245bfa5c9f74f1819dbf06631959b22f601b3433eb2d1431d not found: ID does not exist" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999406 4995 scope.go:117] "RemoveContainer" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: E0126 23:23:31.999737 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": container with ID starting with c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a not found: ID does not exist" containerID="c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a" Jan 26 23:23:31 crc kubenswrapper[4995]: I0126 23:23:31.999779 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a"} err="failed to get container status \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": rpc error: code = NotFound desc = could not find container \"c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a\": container with ID starting with c8d1ecfc9dd01667a771674a55628ffa31769ab0b9f201aa33984feddb9a3e3a not found: ID does not exist" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042172 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-utilities\") on node 
\"crc\" DevicePath \"\"" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042210 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44311f3a-63ea-444c-bda7-470d8c27fbcb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.042225 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pknq7\" (UniqueName: \"kubernetes.io/projected/44311f3a-63ea-444c-bda7-470d8c27fbcb-kube-api-access-pknq7\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.223426 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.231650 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qpbtn"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.529995 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" path="/var/lib/kubelet/pods/44311f3a-63ea-444c-bda7-470d8c27fbcb/volumes" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.680973 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"] Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681215 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-content" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681226 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-content" Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681240 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server" Jan 26 23:23:32 crc 
kubenswrapper[4995]: I0126 23:23:32.681246 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server" Jan 26 23:23:32 crc kubenswrapper[4995]: E0126 23:23:32.681262 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-utilities" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681269 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="extract-utilities" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681368 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="44311f3a-63ea-444c-bda7-470d8c27fbcb" containerName="registry-server" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.681766 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.683930 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vgt2l" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.697510 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.698558 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.700186 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-rf92p" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.717565 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.718971 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.723011 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tbnjf" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.726778 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.742418 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.743619 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.745395 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j8vtt" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.765417 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.783645 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.796052 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.798991 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.803237 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6xqrc" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.808593 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.836784 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.854964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: \"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855129 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855209 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod 
\"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.855317 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.860529 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.862627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.876958 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-wjvbd" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.885586 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.910262 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.911372 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.914690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-b6wmk" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.914839 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.949565 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod \"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956431 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956466 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: 
\"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956511 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod \"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: \"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.956561 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.974163 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"] Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.975179 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.983052 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whrdd\" (UniqueName: \"kubernetes.io/projected/90ae2b4f-43e9-4a37-abc5-d90e958e540b-kube-api-access-whrdd\") pod \"designate-operator-controller-manager-77554cdc5c-kgv2f\" (UID: \"90ae2b4f-43e9-4a37-abc5-d90e958e540b\") " pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:23:32 crc kubenswrapper[4995]: I0126 23:23:32.990921 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-z4krd" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000619 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnm6s\" (UniqueName: \"kubernetes.io/projected/c5dd6b1a-1515-4ad6-b89e-0c7253a71281-kube-api-access-gnm6s\") pod \"barbican-operator-controller-manager-6987f66698-x2fg8\" (UID: \"c5dd6b1a-1515-4ad6-b89e-0c7253a71281\") " pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zthkc\" (UniqueName: \"kubernetes.io/projected/70dc0d96-2ba1-487e-8ffc-a98725e002c4-kube-api-access-zthkc\") pod \"cinder-operator-controller-manager-655bf9cfbb-pzzq9\" (UID: \"70dc0d96-2ba1-487e-8ffc-a98725e002c4\") " pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.000633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmq6\" (UniqueName: \"kubernetes.io/projected/4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6-kube-api-access-xxmq6\") pod \"glance-operator-controller-manager-67dd55ff59-gdvdp\" (UID: 
\"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6\") " pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.013991 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.017667 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.033288 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.034715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.036864 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.038906 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-zzdgl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.052598 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.053660 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.056281 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sqc4z" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058205 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: \"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058248 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058294 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod \"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058334 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod 
\"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.058373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.068312 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.074572 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.075170 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9t7n\" (UniqueName: \"kubernetes.io/projected/e29f1042-97e4-430c-a262-53ab3cca40d9-kube-api-access-c9t7n\") pod \"heat-operator-controller-manager-954b94f75-7q5kj\" (UID: \"e29f1042-97e4-430c-a262-53ab3cca40d9\") " pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.083244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.084560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkpn2\" (UniqueName: \"kubernetes.io/projected/bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e-kube-api-access-tkpn2\") pod 
\"horizon-operator-controller-manager-77d5c5b54f-r7mgm\" (UID: \"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.091372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.092404 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.098624 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.099201 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-8gqwc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.108222 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.110372 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4krhf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.129442 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.138722 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.148125 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.155306 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.156498 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.158645 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-cbvjx" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159436 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159483 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159527 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159563 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159607 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod \"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.159648 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.159827 4995 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.159870 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:33.659853205 +0000 UTC m=+917.824560670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.163236 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.170429 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.171561 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.173800 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-9l5z9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.176441 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.181882 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6crnf\" (UniqueName: \"kubernetes.io/projected/555394ee-9ad5-417f-9698-646ba1ddc5f2-kube-api-access-6crnf\") pod \"ironic-operator-controller-manager-768b776ffb-6gtf9\" (UID: \"555394ee-9ad5-417f-9698-646ba1ddc5f2\") " pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.187646 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgvms\" (UniqueName: \"kubernetes.io/projected/3a2f8d86-155b-476b-86c4-fda3eb595fc9-kube-api-access-wgvms\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.189077 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.190716 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.192355 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lwlfd" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.196731 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.197572 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.205878 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.206082 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-f9r8r" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.207545 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.214599 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.215938 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.216011 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.218725 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-j5sdv" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.224204 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.237187 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.255607 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.257973 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.260820 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9qbnq" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261777 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261810 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261887 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261918 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mrhd\" (UniqueName: 
\"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261943 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.261969 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod \"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262042 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod 
\"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.262072 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.270575 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.300873 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.316283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kklj\" (UniqueName: \"kubernetes.io/projected/fd2183e6-a9e4-44b8-861f-9a545aac1c12-kube-api-access-9kklj\") pod \"manila-operator-controller-manager-849fcfbb6b-w2gfg\" (UID: \"fd2183e6-a9e4-44b8-861f-9a545aac1c12\") " pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.321427 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcldk\" (UniqueName: \"kubernetes.io/projected/235cf5b2-2094-4345-bf37-edbcb2e5e48f-kube-api-access-fcldk\") pod \"keystone-operator-controller-manager-55f684fd56-gzjxj\" (UID: \"235cf5b2-2094-4345-bf37-edbcb2e5e48f\") " pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc 
kubenswrapper[4995]: I0126 23:23:33.329386 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcjzd\" (UniqueName: \"kubernetes.io/projected/0d39c5fc-e526-46e8-8773-6bf87e938b06-kube-api-access-jcjzd\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh\" (UID: \"0d39c5fc-e526-46e8-8773-6bf87e938b06\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363815 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363847 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363968 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.363992 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mrhd\" (UniqueName: \"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364053 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.364076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod 
\"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.364229 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.364283 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:33.864265299 +0000 UTC m=+918.028972764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.394607 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fncvh\" (UniqueName: \"kubernetes.io/projected/ce22ba19-581c-4f75-9bd6-4de0538779a2-kube-api-access-fncvh\") pod \"octavia-operator-controller-manager-756f86fc74-7s666\" (UID: \"ce22ba19-581c-4f75-9bd6-4de0538779a2\") " pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.397964 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nff6t\" (UniqueName: \"kubernetes.io/projected/cfbd9d32-25ae-4369-8e16-ce174c0802dc-kube-api-access-nff6t\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.398094 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98vr\" (UniqueName: \"kubernetes.io/projected/4e9b965f-6060-43e7-aa1c-b73472075bae-kube-api-access-j98vr\") pod \"nova-operator-controller-manager-7f54b7d6d4-cf7gh\" (UID: \"4e9b965f-6060-43e7-aa1c-b73472075bae\") " pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.402388 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mrhd\" (UniqueName: \"kubernetes.io/projected/1b364747-4f4c-4431-becf-0f2b30bc9d20-kube-api-access-5mrhd\") pod \"ovn-operator-controller-manager-6f75f45d54-z899w\" (UID: \"1b364747-4f4c-4431-becf-0f2b30bc9d20\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.405634 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.406608 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.411956 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k86gl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.413260 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm6cv\" (UniqueName: \"kubernetes.io/projected/03047106-c820-43c2-bee1-c8b1fb3a0a0c-kube-api-access-xm6cv\") pod \"neutron-operator-controller-manager-7ffd8d76d4-p47jp\" (UID: \"03047106-c820-43c2-bee1-c8b1fb3a0a0c\") " pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.417284 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.453386 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.462703 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.464855 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465130 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.465159 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.468475 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-n9dct" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.470229 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.475465 4995 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.496677 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.497651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9wr6\" (UniqueName: \"kubernetes.io/projected/aba99191-8a3a-47dc-8dca-136de682a567-kube-api-access-k9wr6\") pod \"swift-operator-controller-manager-547cbdb99f-b4kzb\" (UID: \"aba99191-8a3a-47dc-8dca-136de682a567\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.498025 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjqh5\" (UniqueName: \"kubernetes.io/projected/931ac40b-6695-41c7-9d8f-c8eefca6e587-kube-api-access-rjqh5\") pod \"placement-operator-controller-manager-79d5ccc684-5zhml\" (UID: \"931ac40b-6695-41c7-9d8f-c8eefca6e587\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.508124 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.524190 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.525121 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.528025 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fw4c6" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.528223 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.528285 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c1f5873_cf2b_4fd3_a83e_97611d3ee0e6.slice/crio-ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3 WatchSource:0}: Error finding container ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3: Status 404 returned error can't find the container with id ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.534235 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.542036 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.542975 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.545274 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.546322 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.546585 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8npzj" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.552314 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.554794 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.555833 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.556322 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.566856 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.566987 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.575191 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.585205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnsn\" (UniqueName: \"kubernetes.io/projected/fd5d672d-1c27-4782-bbf3-c6d936a8c9bb-kube-api-access-spnsn\") pod \"telemetry-operator-controller-manager-799bc87c89-bmdgt\" (UID: \"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb\") " pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.597366 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.599764 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.600631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.601379 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70dc0d96_2ba1_487e_8ffc_a98725e002c4.slice/crio-6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646 WatchSource:0}: Error finding container 6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646: Status 404 returned error can't find the container with id 6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.603397 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-v6k5n" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.605479 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.614542 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.626524 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.629162 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.630947 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.663554 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668079 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668131 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668162 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668196 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 
crc kubenswrapper[4995]: I0126 23:23:33.668228 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668244 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.668306 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.668417 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.668455 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.668442104 +0000 UTC m=+918.833149569 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.689165 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxv76\" (UniqueName: \"kubernetes.io/projected/b60b13f0-97c0-42b9-85fd-2a51218c9ac1-kube-api-access-mxv76\") pod \"test-operator-controller-manager-69797bbcbd-kjmpf\" (UID: \"b60b13f0-97c0-42b9-85fd-2a51218c9ac1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.725230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.767370 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770824 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770905 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.770955 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771092 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771148 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.271130738 +0000 UTC m=+918.435838203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771637 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.771668 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:34.271659572 +0000 UTC m=+918.436367037 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.782351 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.800146 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f"] Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.802690 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"watcher-operator-controller-manager-7b8f755c7-tlv6g\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.802927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbbx5\" (UniqueName: \"kubernetes.io/projected/03478ac9-bd6b-4726-86b4-cd29045b6dc0-kube-api-access-lbbx5\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.806273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghlvz\" (UniqueName: \"kubernetes.io/projected/a0641fd3-88a7-4fb2-93f9-ffce84aadef2-kube-api-access-ghlvz\") pod \"rabbitmq-cluster-operator-manager-668c99d594-dk2dl\" (UID: \"a0641fd3-88a7-4fb2-93f9-ffce84aadef2\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.830596 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.832505 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ae2b4f_43e9_4a37_abc5_d90e958e540b.slice/crio-79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3 WatchSource:0}: Error finding container 79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3: Status 404 returned error can't find the container with id 79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.860536 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.876931 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.877130 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: E0126 23:23:33.877189 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. 
No retries permitted until 2026-01-26 23:23:34.877172456 +0000 UTC m=+919.041879921 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.890862 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm"] Jan 26 23:23:33 crc kubenswrapper[4995]: W0126 23:23:33.926876 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd8c5b8d_f13d_48a8_82ff_9928fb5b5b5e.slice/crio-1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3 WatchSource:0}: Error finding container 1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3: Status 404 returned error can't find the container with id 1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3 Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.928993 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.930551 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" event={"ID":"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6","Type":"ContainerStarted","Data":"ce432dd8a92e7f2cea330114c0cee874a27a397cbaf29a92788fc211a9fd15f3"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.941031 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" event={"ID":"90ae2b4f-43e9-4a37-abc5-d90e958e540b","Type":"ContainerStarted","Data":"79e9de4c165ef360a1c0b15eab1fbb6065659c3169cb094a5bdfb0000ea373a3"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.958090 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" event={"ID":"c5dd6b1a-1515-4ad6-b89e-0c7253a71281","Type":"ContainerStarted","Data":"fa010a063338fd4ba88d0d5ab493d394b1f1b84f432065519814e0b18329e32d"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.959752 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" event={"ID":"e29f1042-97e4-430c-a262-53ab3cca40d9","Type":"ContainerStarted","Data":"71f098930a94cb3e775748336a363501513777b8facfcf6ad0c544a897977bd2"} Jan 26 23:23:33 crc kubenswrapper[4995]: I0126 23:23:33.978154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" event={"ID":"70dc0d96-2ba1-487e-8ffc-a98725e002c4","Type":"ContainerStarted","Data":"6bf28be1ab73e9905e3a89920972f95a74143032dccbb74d789a1813bd607646"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.024050 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.074495 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.113384 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod555394ee_9ad5_417f_9698_646ba1ddc5f2.slice/crio-8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf WatchSource:0}: Error finding container 8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf: Status 404 returned error can't find the container with id 8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.203356 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.218264 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2183e6_a9e4_44b8_861f_9a545aac1c12.slice/crio-b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717 WatchSource:0}: Error finding container b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717: Status 404 returned error can't find the container with id b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717 Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.233606 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.250158 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.285685 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.285767 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285853 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:35.285898461 +0000 UTC m=+919.450605926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.285970 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.286034 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:35.286016804 +0000 UTC m=+919.450724339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.541267 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5d672d_1c27_4782_bbf3_c6d936a8c9bb.slice/crio-c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56 WatchSource:0}: Error finding container c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56: Status 404 returned error can't find the container with id c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56 Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557373 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557621 4995 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.557634 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.576349 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.577045 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode28ba494_e3ae_4294_8018_e9b8d7a1f96a.slice/crio-8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f WatchSource:0}: Error finding container 8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f: Status 404 returned error can't find the container with id 8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.584444 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fjxhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-7b8f755c7-tlv6g_openstack-operators(e28ba494-e3ae-4294-8018-e9b8d7a1f96a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.585755 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:34 crc 
kubenswrapper[4995]: I0126 23:23:34.587135 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.605411 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.690989 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.691313 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.691420 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:36.691392156 +0000 UTC m=+920.856099621 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.711917 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh"] Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.722952 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jcjzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh_openstack-operators(0d39c5fc-e526-46e8-8773-6bf87e938b06): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.724166 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.733316 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w"] Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.739402 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.743813 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b364747_4f4c_4431_becf_0f2b30bc9d20.slice/crio-2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc WatchSource:0}: Error finding container 2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc: Status 404 returned error can't find the container with id 2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.745959 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0641fd3_88a7_4fb2_93f9_ffce84aadef2.slice/crio-9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50 WatchSource:0}: Error finding container 9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50: Status 404 returned error can't find the container with id 9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50 Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.751276 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ghlvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-dk2dl_openstack-operators(a0641fd3-88a7-4fb2-93f9-ffce84aadef2): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.752417 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.782452 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml"] Jan 26 23:23:34 crc kubenswrapper[4995]: W0126 23:23:34.794227 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod931ac40b_6695_41c7_9d8f_c8eefca6e587.slice/crio-457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd WatchSource:0}: Error finding container 457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd: Status 404 returned error can't find the container with id 457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.797756 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} 
{} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rjqh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-5zhml_openstack-operators(931ac40b-6695-41c7-9d8f-c8eefca6e587): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.798932 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" 
podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.896661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.896792 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.896843 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:36.896829486 +0000 UTC m=+921.061536951 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.983821 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerStarted","Data":"8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.984914 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" event={"ID":"4e9b965f-6060-43e7-aa1c-b73472075bae","Type":"ContainerStarted","Data":"98cbfe9f7b14a4d5efeb6d86491ae4a6e71a3235f9e1474331b1c8cdce0a471b"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.985338 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.986038 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" event={"ID":"fd2183e6-a9e4-44b8-861f-9a545aac1c12","Type":"ContainerStarted","Data":"b25eaa9d2f14448dad7bfd1aec42603f572a30085ef7d5fb8a24b5f38b816717"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.986900 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" event={"ID":"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e","Type":"ContainerStarted","Data":"1bba2597f5cd73aa73f7d3652ced902b755a2dc2df8baf5fb74a49b8899c2fd3"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.988132 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" event={"ID":"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb","Type":"ContainerStarted","Data":"c0dcc40692578aed101fc6f67b6d3afa2819467e675116c528282b680ea31d56"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.989129 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" event={"ID":"1b364747-4f4c-4431-becf-0f2b30bc9d20","Type":"ContainerStarted","Data":"2d1b4bee5070f8923c4bb70324dc0e80bdb812283a4fa1f49a36f5c24bb81ebc"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.990041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" event={"ID":"b60b13f0-97c0-42b9-85fd-2a51218c9ac1","Type":"ContainerStarted","Data":"76ffa1ad397a7f4ad1f1a63d13fc1270672ed78b210e725944382eabbd094d43"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.990930 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" event={"ID":"a0641fd3-88a7-4fb2-93f9-ffce84aadef2","Type":"ContainerStarted","Data":"9e22da7f158f2b8b8e8eeb5726435e908e1c9c41a596eb1979f825e22d57dc50"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.992029 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.992795 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" event={"ID":"aba99191-8a3a-47dc-8dca-136de682a567","Type":"ContainerStarted","Data":"8daecca1eb253faed5e10103bf43dc1ef1f33d212699a0dd5017e1c0de7f3880"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.993625 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" event={"ID":"03047106-c820-43c2-bee1-c8b1fb3a0a0c","Type":"ContainerStarted","Data":"920b323a81d13117b743c419afa7e9978d132e659c59f9586fd22972010d73a1"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.994362 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" event={"ID":"235cf5b2-2094-4345-bf37-edbcb2e5e48f","Type":"ContainerStarted","Data":"e9b2b1fd3e24002b96b21764e5374029b2a1a2aa608a263c5d3fd60e688c69ba"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.995230 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" event={"ID":"0d39c5fc-e526-46e8-8773-6bf87e938b06","Type":"ContainerStarted","Data":"0fcdbbcdfedca2eedb1606627094da500578bde4e5bb2c8d508748c9a648ea02"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.996025 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 
23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.996691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" event={"ID":"931ac40b-6695-41c7-9d8f-c8eefca6e587","Type":"ContainerStarted","Data":"457b2b5a72f7084231cf138d41d0b1eff889928d4c395533fffa59a7ca72e4bd"} Jan 26 23:23:34 crc kubenswrapper[4995]: E0126 23:23:34.997653 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.998154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" event={"ID":"555394ee-9ad5-417f-9698-646ba1ddc5f2","Type":"ContainerStarted","Data":"8c730fce6cb0f1b27bd2e10f4a2472b950ce28d60d9b930455d34218edaf74cf"} Jan 26 23:23:34 crc kubenswrapper[4995]: I0126 23:23:34.999477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" event={"ID":"ce22ba19-581c-4f75-9bd6-4de0538779a2","Type":"ContainerStarted","Data":"e782ccd6cff54a1a4868ff3e0b7aa7c52dc1815d6b8d0ca9c549c490a24d3e92"} Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.188500 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.306641 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod 
\"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:35 crc kubenswrapper[4995]: I0126 23:23:35.306734 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306820 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306917 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:37.306893393 +0000 UTC m=+921.471600928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.306959 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:35 crc kubenswrapper[4995]: E0126 23:23:35.307052 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:23:37.307029677 +0000 UTC m=+921.471737202 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026457 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podUID="0d39c5fc-e526-46e8-8773-6bf87e938b06" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026476 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podUID="a0641fd3-88a7-4fb2-93f9-ffce84aadef2" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026526 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/openstack-k8s-operators/watcher-operator:09b2d9800e2605016d087ebe1039eab09a5c2745\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.026654 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podUID="931ac40b-6695-41c7-9d8f-c8eefca6e587" Jan 26 23:23:36 crc kubenswrapper[4995]: I0126 23:23:36.739494 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.739947 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.739987 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:40.739973436 +0000 UTC m=+924.904680901 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: I0126 23:23:36.944449 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.944895 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:36 crc kubenswrapper[4995]: E0126 23:23:36.944972 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:40.944952484 +0000 UTC m=+925.109659939 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.028177 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7j9zc" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server" containerID="cri-o://acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" gracePeriod=2 Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.352651 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.352706 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352843 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352923 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs 
podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:41.35290333 +0000 UTC m=+925.517610795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352855 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: E0126 23:23:37.352992 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:41.352977931 +0000 UTC m=+925.517685396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.466319 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555834 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.555880 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") pod \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\" (UID: \"fb8f3318-4432-4877-9e0c-1ae39d3a849e\") " Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.557523 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities" (OuterVolumeSpecName: "utilities") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.562224 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp" (OuterVolumeSpecName: "kube-api-access-nvclp") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "kube-api-access-nvclp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.606730 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb8f3318-4432-4877-9e0c-1ae39d3a849e" (UID: "fb8f3318-4432-4877-9e0c-1ae39d3a849e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.657971 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.658004 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb8f3318-4432-4877-9e0c-1ae39d3a849e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:37 crc kubenswrapper[4995]: I0126 23:23:37.658014 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvclp\" (UniqueName: \"kubernetes.io/projected/fb8f3318-4432-4877-9e0c-1ae39d3a849e-kube-api-access-nvclp\") on node \"crc\" DevicePath \"\"" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043528 4995 generic.go:334] "Generic (PLEG): container finished" podID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" exitCode=0 Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043599 4995 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7j9zc" event={"ID":"fb8f3318-4432-4877-9e0c-1ae39d3a849e","Type":"ContainerDied","Data":"04da271e4f7505c2ffd196e4561fda52d5add96fbb0f643f634bb2bd36cc7757"} Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043619 4995 scope.go:117] "RemoveContainer" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.043629 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7j9zc" Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.076871 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.082634 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7j9zc"] Jan 26 23:23:38 crc kubenswrapper[4995]: I0126 23:23:38.528750 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" path="/var/lib/kubelet/pods/fb8f3318-4432-4877-9e0c-1ae39d3a849e/volumes" Jan 26 23:23:40 crc kubenswrapper[4995]: I0126 23:23:40.743652 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.743854 4995 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.743985 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert podName:3a2f8d86-155b-476b-86c4-fda3eb595fc9 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:48.743941918 +0000 UTC m=+932.908649383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert") pod "infra-operator-controller-manager-7d75bc88d5-n9dc8" (UID: "3a2f8d86-155b-476b-86c4-fda3eb595fc9") : secret "infra-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: I0126 23:23:40.947264 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.947456 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:40 crc kubenswrapper[4995]: E0126 23:23:40.947500 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:23:48.94748761 +0000 UTC m=+933.112195075 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: I0126 23:23:41.454218 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:41 crc kubenswrapper[4995]: I0126 23:23:41.454578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454457 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454833 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:49.454819567 +0000 UTC m=+933.619527032 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454683 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:41 crc kubenswrapper[4995]: E0126 23:23:41.454994 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:23:49.454952741 +0000 UTC m=+933.619660246 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.134015 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.134839 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm6cv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7ffd8d76d4-p47jp_openstack-operators(03047106-c820-43c2-bee1-c8b1fb3a0a0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.136092 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podUID="03047106-c820-43c2-bee1-c8b1fb3a0a0c" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.760865 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.761185 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-whrdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-77554cdc5c-kgv2f_openstack-operators(90ae2b4f-43e9-4a37-abc5-d90e958e540b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.762975 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podUID="90ae2b4f-43e9-4a37-abc5-d90e958e540b" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.763284 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.779632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/3a2f8d86-155b-476b-86c4-fda3eb595fc9-cert\") pod \"infra-operator-controller-manager-7d75bc88d5-n9dc8\" (UID: \"3a2f8d86-155b-476b-86c4-fda3eb595fc9\") " pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.851057 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:23:48 crc kubenswrapper[4995]: I0126 23:23:48.967986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.968173 4995 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:48 crc kubenswrapper[4995]: E0126 23:23:48.968271 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert podName:cfbd9d32-25ae-4369-8e16-ce174c0802dc nodeName:}" failed. No retries permitted until 2026-01-26 23:24:04.968245726 +0000 UTC m=+949.132953191 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" (UID: "cfbd9d32-25ae-4369-8e16-ce174c0802dc") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.137527 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/neutron-operator@sha256:14786c3a66c41213a03d6375c03209f22d439dd6e752317ddcbe21dda66bb569\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podUID="03047106-c820-43c2-bee1-c8b1fb3a0a0c" Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.139145 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/designate-operator@sha256:d26a32730ba8b64e98f68194bd1a766aadc942392b24fa6a2cf1c136969dd99f\\\"\"" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podUID="90ae2b4f-43e9-4a37-abc5-d90e958e540b" Jan 26 23:23:49 crc kubenswrapper[4995]: I0126 23:23:49.474773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:49 crc kubenswrapper[4995]: I0126 23:23:49.474905 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" 
(UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475181 4995 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475262 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:24:05.475235414 +0000 UTC m=+949.639942919 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "webhook-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475436 4995 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 23:23:49 crc kubenswrapper[4995]: E0126 23:23:49.475488 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs podName:03478ac9-bd6b-4726-86b4-cd29045b6dc0 nodeName:}" failed. No retries permitted until 2026-01-26 23:24:05.47547151 +0000 UTC m=+949.640179015 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs") pod "openstack-operator-controller-manager-58b6ccbf98-85h8w" (UID: "03478ac9-bd6b-4726-86b4-cd29045b6dc0") : secret "metrics-server-cert" not found Jan 26 23:23:54 crc kubenswrapper[4995]: I0126 23:23:54.496818 4995 scope.go:117] "RemoveContainer" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.766274 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.766902 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5mrhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-z899w_openstack-operators(1b364747-4f4c-4431-becf-0f2b30bc9d20): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:54 crc kubenswrapper[4995]: E0126 23:23:54.768288 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podUID="1b364747-4f4c-4431-becf-0f2b30bc9d20" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.686899 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podUID="1b364747-4f4c-4431-becf-0f2b30bc9d20" Jan 26 23:23:55 crc kubenswrapper[4995]: I0126 23:23:55.744647 4995 scope.go:117] "RemoveContainer" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.744878 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.745170 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j98vr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7f54b7d6d4-cf7gh_openstack-operators(4e9b965f-6060-43e7-aa1c-b73472075bae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:55 crc kubenswrapper[4995]: E0126 23:23:55.746518 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podUID="4e9b965f-6060-43e7-aa1c-b73472075bae" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.197278 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/nova-operator@sha256:dbde47574a2204e5cb6af468e5c74df5124b1daab0ebcb0dc8c489fa40c8942f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podUID="4e9b965f-6060-43e7-aa1c-b73472075bae" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.776202 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.776591 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fcldk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55f684fd56-gzjxj_openstack-operators(235cf5b2-2094-4345-bf37-edbcb2e5e48f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:23:56 crc kubenswrapper[4995]: E0126 23:23:56.777784 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podUID="235cf5b2-2094-4345-bf37-edbcb2e5e48f" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.201617 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:008a2e338430e7dd513f81f66320cc5c1332c332a3191b537d75786489d7f487\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podUID="235cf5b2-2094-4345-bf37-edbcb2e5e48f" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.297599 4995 scope.go:117] "RemoveContainer" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.298208 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": container with ID starting with acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca not found: ID does not exist" containerID="acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298258 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca"} err="failed to get container status \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": rpc error: code = NotFound desc = could not find container \"acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca\": container with ID starting with acd575567b2d13d6dab71658bd3bcf4dbf2eaa595653c08f6368acb341d1f2ca not found: ID does not exist" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298286 4995 scope.go:117] "RemoveContainer" 
containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.298631 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": container with ID starting with b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492 not found: ID does not exist" containerID="b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298678 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492"} err="failed to get container status \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": rpc error: code = NotFound desc = could not find container \"b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492\": container with ID starting with b429480b34a131b83ca6998b403673e1430b3f2dfcd2cbcbcbb2331d51a13492 not found: ID does not exist" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.298693 4995 scope.go:117] "RemoveContainer" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:57 crc kubenswrapper[4995]: E0126 23:23:57.299078 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": container with ID starting with 34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b not found: ID does not exist" containerID="34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b" Jan 26 23:23:57 crc kubenswrapper[4995]: I0126 23:23:57.299187 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b"} err="failed to get container status \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": rpc error: code = NotFound desc = could not find container \"34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b\": container with ID starting with 34a34420a154c1d9240f97a2edbdfcb52b4e797fa7d573158353903cb00f798b not found: ID does not exist" Jan 26 23:23:59 crc kubenswrapper[4995]: I0126 23:23:59.217339 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8"] Jan 26 23:23:59 crc kubenswrapper[4995]: W0126 23:23:59.295273 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a2f8d86_155b_476b_86c4_fda3eb595fc9.slice/crio-34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a WatchSource:0}: Error finding container 34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a: Status 404 returned error can't find the container with id 34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a Jan 26 23:23:59 crc kubenswrapper[4995]: I0126 23:23:59.318065 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.227244 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" event={"ID":"fd2183e6-a9e4-44b8-861f-9a545aac1c12","Type":"ContainerStarted","Data":"425dbb2c3ff281b3978c8bf23ba3e5786c56e311bf3329529153c59e822e731b"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.227602 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.231839 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" event={"ID":"bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e","Type":"ContainerStarted","Data":"71be7ba21ce684c877534dcea5ef377ec267934f459259af4c6bf20c0397963a"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.232427 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.238482 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" event={"ID":"a0641fd3-88a7-4fb2-93f9-ffce84aadef2","Type":"ContainerStarted","Data":"cb067666a02842c5d4943828660a0a26b3d91f9b5821ff29271d5897b9b01921"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.241749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" event={"ID":"931ac40b-6695-41c7-9d8f-c8eefca6e587","Type":"ContainerStarted","Data":"fdcc4ccbdebd8ac253aa344175a4763b27dcb627ac664aa3c8f8028958bb1125"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.242350 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.244668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" event={"ID":"aba99191-8a3a-47dc-8dca-136de682a567","Type":"ContainerStarted","Data":"1c5c7c5db459d927b01a20361b3ee86a1d6b224ce124485c7ab9509c06d62b8e"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.244717 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:24:00 crc kubenswrapper[4995]: 
I0126 23:24:00.245924 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" event={"ID":"3a2f8d86-155b-476b-86c4-fda3eb595fc9","Type":"ContainerStarted","Data":"34d4803c446491cacb1f9bb67a30b9a52fd01d58ea3b281d5361fbbb25e2aa7a"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.252619 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" podStartSLOduration=4.254682458 podStartE2EDuration="28.252600938s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.225337949 +0000 UTC m=+918.390045414" lastFinishedPulling="2026-01-26 23:23:58.223256419 +0000 UTC m=+942.387963894" observedRunningTime="2026-01-26 23:24:00.249866159 +0000 UTC m=+944.414573624" watchObservedRunningTime="2026-01-26 23:24:00.252600938 +0000 UTC m=+944.417308403" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.253012 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" event={"ID":"e29f1042-97e4-430c-a262-53ab3cca40d9","Type":"ContainerStarted","Data":"1d759865728d2d43489a6b8aa2a9ac9172e08cf025c770244e92e89555fd5b42"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.253774 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.265631 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" event={"ID":"fd5d672d-1c27-4782-bbf3-c6d936a8c9bb","Type":"ContainerStarted","Data":"aa4bf00a0d56dd4eab33900595ef5aeec8fbf6242964bb00d63e0e7cbb2562ea"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.267207 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.268589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" event={"ID":"ce22ba19-581c-4f75-9bd6-4de0538779a2","Type":"ContainerStarted","Data":"72e2686d39e2a1ece6d8f29eaca9ebf8dae5b716d153699fd40c773f18e2d2a7"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.269240 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.276540 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" event={"ID":"b60b13f0-97c0-42b9-85fd-2a51218c9ac1","Type":"ContainerStarted","Data":"9a66e2284757cf521b97d7874fc95404fd3fee234318aeedcdb072ee42001524"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.277235 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.286430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" event={"ID":"70dc0d96-2ba1-487e-8ffc-a98725e002c4","Type":"ContainerStarted","Data":"efcba479dd7725c611825d529a745734bc83a8c6325625e9a2bddb1efeedf2d2"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.287269 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.288664 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-dk2dl" podStartSLOduration=2.634415452 
podStartE2EDuration="27.288645868s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.751136788 +0000 UTC m=+918.915844253" lastFinishedPulling="2026-01-26 23:23:59.405367194 +0000 UTC m=+943.570074669" observedRunningTime="2026-01-26 23:24:00.283445038 +0000 UTC m=+944.448152573" watchObservedRunningTime="2026-01-26 23:24:00.288645868 +0000 UTC m=+944.453353333" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.297806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" event={"ID":"4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6","Type":"ContainerStarted","Data":"081a100c455622cc35271500428dc1e1afec1f13946dc0158d093f410b942e32"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.299067 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.306345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" event={"ID":"c5dd6b1a-1515-4ad6-b89e-0c7253a71281","Type":"ContainerStarted","Data":"96f5f7e5688b748f0863bcf6fb5e65e092c9d02536e07f285aaa4b712c9b5f1e"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.306414 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.312800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerStarted","Data":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.313568 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.313578 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" podStartSLOduration=6.533772363 podStartE2EDuration="28.31355794s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.949183214 +0000 UTC m=+918.113890679" lastFinishedPulling="2026-01-26 23:23:55.728968751 +0000 UTC m=+939.893676256" observedRunningTime="2026-01-26 23:24:00.31035306 +0000 UTC m=+944.475060525" watchObservedRunningTime="2026-01-26 23:24:00.31355794 +0000 UTC m=+944.478265425" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.351386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" event={"ID":"555394ee-9ad5-417f-9698-646ba1ddc5f2","Type":"ContainerStarted","Data":"d8f2d6dd22673cc2e070ae43e6acd84d5ab84a281993d9771e53d72dfced9a48"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.351519 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.370662 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" event={"ID":"0d39c5fc-e526-46e8-8773-6bf87e938b06","Type":"ContainerStarted","Data":"903e197ff8ff455daea89f3cda21bd1e7b2437e42ac288cd994ef09b1ddc84a8"} Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.371716 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.393031 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" podStartSLOduration=7.207647078 podStartE2EDuration="28.393006893s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.559033331 +0000 UTC m=+918.723740806" lastFinishedPulling="2026-01-26 23:23:55.744393156 +0000 UTC m=+939.909100621" observedRunningTime="2026-01-26 23:24:00.376305426 +0000 UTC m=+944.541012881" watchObservedRunningTime="2026-01-26 23:24:00.393006893 +0000 UTC m=+944.557714358" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.445786 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" podStartSLOduration=4.474884486 podStartE2EDuration="28.445771631s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.797650189 +0000 UTC m=+918.962357654" lastFinishedPulling="2026-01-26 23:23:58.768537314 +0000 UTC m=+942.933244799" observedRunningTime="2026-01-26 23:24:00.417362652 +0000 UTC m=+944.582070117" watchObservedRunningTime="2026-01-26 23:24:00.445771631 +0000 UTC m=+944.610479096" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.446495 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podStartSLOduration=2.657457457 podStartE2EDuration="27.446491349s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.584314253 +0000 UTC m=+918.749021718" lastFinishedPulling="2026-01-26 23:23:59.373348125 +0000 UTC m=+943.538055610" observedRunningTime="2026-01-26 23:24:00.439072244 +0000 UTC m=+944.603779709" watchObservedRunningTime="2026-01-26 23:24:00.446491349 +0000 UTC m=+944.611198814" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.463624 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" podStartSLOduration=3.783283028 podStartE2EDuration="28.463607956s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.542935601 +0000 UTC m=+917.707643066" lastFinishedPulling="2026-01-26 23:23:58.223260519 +0000 UTC m=+942.387967994" observedRunningTime="2026-01-26 23:24:00.461134085 +0000 UTC m=+944.625841550" watchObservedRunningTime="2026-01-26 23:24:00.463607956 +0000 UTC m=+944.628315421" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.488278 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" podStartSLOduration=5.554220746 podStartE2EDuration="28.488263982s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.864009247 +0000 UTC m=+918.028716712" lastFinishedPulling="2026-01-26 23:23:56.798052483 +0000 UTC m=+940.962759948" observedRunningTime="2026-01-26 23:24:00.487497673 +0000 UTC m=+944.652205138" watchObservedRunningTime="2026-01-26 23:24:00.488263982 +0000 UTC m=+944.652971437" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.550803 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" podStartSLOduration=5.281424324 podStartE2EDuration="27.550783243s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.544990181 +0000 UTC m=+918.709697646" lastFinishedPulling="2026-01-26 23:23:56.8143491 +0000 UTC m=+940.979056565" observedRunningTime="2026-01-26 23:24:00.520501197 +0000 UTC m=+944.685208672" watchObservedRunningTime="2026-01-26 23:24:00.550783243 +0000 UTC m=+944.715490708" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.551268 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" podStartSLOduration=6.645152225 podStartE2EDuration="28.551263395s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.8228555 +0000 UTC m=+917.987562965" lastFinishedPulling="2026-01-26 23:23:55.72896663 +0000 UTC m=+939.893674135" observedRunningTime="2026-01-26 23:24:00.54947934 +0000 UTC m=+944.714186805" watchObservedRunningTime="2026-01-26 23:24:00.551263395 +0000 UTC m=+944.715970850" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.569518 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" podStartSLOduration=3.960331699 podStartE2EDuration="28.56949698s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.722731029 +0000 UTC m=+918.887438494" lastFinishedPulling="2026-01-26 23:23:59.33189631 +0000 UTC m=+943.496603775" observedRunningTime="2026-01-26 23:24:00.569036799 +0000 UTC m=+944.733744264" watchObservedRunningTime="2026-01-26 23:24:00.56949698 +0000 UTC m=+944.734204435" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.593679 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" podStartSLOduration=5.393549043 podStartE2EDuration="27.593655943s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.551598506 +0000 UTC m=+918.716305991" lastFinishedPulling="2026-01-26 23:23:56.751705416 +0000 UTC m=+940.916412891" observedRunningTime="2026-01-26 23:24:00.592195437 +0000 UTC m=+944.756902902" watchObservedRunningTime="2026-01-26 23:24:00.593655943 +0000 UTC m=+944.758363408" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.617449 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" podStartSLOduration=5.896719138 podStartE2EDuration="28.617434277s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.552969 +0000 UTC m=+918.717676485" lastFinishedPulling="2026-01-26 23:23:57.273684159 +0000 UTC m=+941.438391624" observedRunningTime="2026-01-26 23:24:00.61394886 +0000 UTC m=+944.778656325" watchObservedRunningTime="2026-01-26 23:24:00.617434277 +0000 UTC m=+944.782141742" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.648491 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" podStartSLOduration=4.056758377 podStartE2EDuration="28.648469392s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.631572054 +0000 UTC m=+917.796279519" lastFinishedPulling="2026-01-26 23:23:58.223283029 +0000 UTC m=+942.387990534" observedRunningTime="2026-01-26 23:24:00.642456582 +0000 UTC m=+944.807164057" watchObservedRunningTime="2026-01-26 23:24:00.648469392 +0000 UTC m=+944.813176857" Jan 26 23:24:00 crc kubenswrapper[4995]: I0126 23:24:00.658911 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" podStartSLOduration=6.026679813 podStartE2EDuration="28.658889892s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.119471486 +0000 UTC m=+918.284178951" lastFinishedPulling="2026-01-26 23:23:56.751681575 +0000 UTC m=+940.916389030" observedRunningTime="2026-01-26 23:24:00.654874942 +0000 UTC m=+944.819582417" watchObservedRunningTime="2026-01-26 23:24:00.658889892 +0000 UTC m=+944.823597357" Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.407359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" event={"ID":"3a2f8d86-155b-476b-86c4-fda3eb595fc9","Type":"ContainerStarted","Data":"f41c23d61ae1f0fec8db535bfb06f831fec0d579f1223b081dc6ef72a37caf74"} Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.407753 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:24:02 crc kubenswrapper[4995]: I0126 23:24:02.429876 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" podStartSLOduration=27.550242322 podStartE2EDuration="30.429852411s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:59.317846079 +0000 UTC m=+943.482553534" lastFinishedPulling="2026-01-26 23:24:02.197456138 +0000 UTC m=+946.362163623" observedRunningTime="2026-01-26 23:24:02.428397044 +0000 UTC m=+946.593104529" watchObservedRunningTime="2026-01-26 23:24:02.429852411 +0000 UTC m=+946.594559886" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.416232 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" event={"ID":"90ae2b4f-43e9-4a37-abc5-d90e958e540b","Type":"ContainerStarted","Data":"9f75de979e8debd0f0408e92f6a297458aaa5b5f5265f358f0c3397dd841b7d0"} Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.416694 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.421093 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" 
event={"ID":"03047106-c820-43c2-bee1-c8b1fb3a0a0c","Type":"ContainerStarted","Data":"85b5dcfecfe8e67bc59258e4b14441561267b335d9b5e6a7cc95340b245dc49f"} Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.422006 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.440597 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" podStartSLOduration=2.294670231 podStartE2EDuration="31.440574457s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:33.863614088 +0000 UTC m=+918.028321553" lastFinishedPulling="2026-01-26 23:24:03.009518284 +0000 UTC m=+947.174225779" observedRunningTime="2026-01-26 23:24:03.435943392 +0000 UTC m=+947.600650867" watchObservedRunningTime="2026-01-26 23:24:03.440574457 +0000 UTC m=+947.605281942" Jan 26 23:24:03 crc kubenswrapper[4995]: I0126 23:24:03.454348 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" podStartSLOduration=2.664648218 podStartE2EDuration="31.454326841s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.266191379 +0000 UTC m=+918.430898844" lastFinishedPulling="2026-01-26 23:24:03.055870002 +0000 UTC m=+947.220577467" observedRunningTime="2026-01-26 23:24:03.452869254 +0000 UTC m=+947.617576719" watchObservedRunningTime="2026-01-26 23:24:03.454326841 +0000 UTC m=+947.619034306" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.015557 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: 
\"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.023160 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cfbd9d32-25ae-4369-8e16-ce174c0802dc-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b85484lj5\" (UID: \"cfbd9d32-25ae-4369-8e16-ce174c0802dc\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.114707 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.523446 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.523509 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.527831 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-webhook-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" 
(UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.529975 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03478ac9-bd6b-4726-86b4-cd29045b6dc0-metrics-certs\") pod \"openstack-operator-controller-manager-58b6ccbf98-85h8w\" (UID: \"03478ac9-bd6b-4726-86b4-cd29045b6dc0\") " pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:05 crc kubenswrapper[4995]: W0126 23:24:05.616309 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcfbd9d32_25ae_4369_8e16_ce174c0802dc.slice/crio-2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9 WatchSource:0}: Error finding container 2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9: Status 404 returned error can't find the container with id 2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9 Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.618360 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5"] Jan 26 23:24:05 crc kubenswrapper[4995]: I0126 23:24:05.689022 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.102340 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w"] Jan 26 23:24:06 crc kubenswrapper[4995]: W0126 23:24:06.114295 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03478ac9_bd6b_4726_86b4_cd29045b6dc0.slice/crio-4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83 WatchSource:0}: Error finding container 4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83: Status 404 returned error can't find the container with id 4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83 Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448767 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" event={"ID":"03478ac9-bd6b-4726-86b4-cd29045b6dc0","Type":"ContainerStarted","Data":"ea13fe15f31261aea83c9994356212607c23ac772413c7990e6e12e9593a33f5"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" event={"ID":"03478ac9-bd6b-4726-86b4-cd29045b6dc0","Type":"ContainerStarted","Data":"4b7c0e543fb19c676d1f3b28adaec5b12ecdde987db27482bb460718724faf83"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.448889 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.456836 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" 
event={"ID":"cfbd9d32-25ae-4369-8e16-ce174c0802dc","Type":"ContainerStarted","Data":"2c0bc0b6cb3349dbcedf20e7d5c9df686e4ef75a5b266a9b13265ea0b606f3c9"} Jan 26 23:24:06 crc kubenswrapper[4995]: I0126 23:24:06.489399 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" podStartSLOduration=33.489380491 podStartE2EDuration="33.489380491s" podCreationTimestamp="2026-01-26 23:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:24:06.476976592 +0000 UTC m=+950.641684107" watchObservedRunningTime="2026-01-26 23:24:06.489380491 +0000 UTC m=+950.654087966" Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.467434 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" event={"ID":"cfbd9d32-25ae-4369-8e16-ce174c0802dc","Type":"ContainerStarted","Data":"0a07e996bfe8eb6e549bbef3228645b8f548ed7f17e47a454d069d3ad3102d1f"} Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.467822 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:07 crc kubenswrapper[4995]: I0126 23:24:07.503027 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" podStartSLOduration=33.818222973 podStartE2EDuration="35.503005779s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:24:05.618779183 +0000 UTC m=+949.783486658" lastFinishedPulling="2026-01-26 23:24:07.303561999 +0000 UTC m=+951.468269464" observedRunningTime="2026-01-26 23:24:07.496766793 +0000 UTC m=+951.661474298" watchObservedRunningTime="2026-01-26 23:24:07.503005779 +0000 UTC 
m=+951.667713254" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.486585 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" event={"ID":"1b364747-4f4c-4431-becf-0f2b30bc9d20","Type":"ContainerStarted","Data":"8795fa5817cae12cba287456f3746c00542fb636d8283b306437be23e5d4b3f2"} Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.487450 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.518634 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" podStartSLOduration=3.076874381 podStartE2EDuration="36.518612147s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.745822215 +0000 UTC m=+918.910529680" lastFinishedPulling="2026-01-26 23:24:08.187559961 +0000 UTC m=+952.352267446" observedRunningTime="2026-01-26 23:24:08.512828013 +0000 UTC m=+952.677535498" watchObservedRunningTime="2026-01-26 23:24:08.518612147 +0000 UTC m=+952.683319632" Jan 26 23:24:08 crc kubenswrapper[4995]: I0126 23:24:08.860157 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d75bc88d5-n9dc8" Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.513607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" event={"ID":"235cf5b2-2094-4345-bf37-edbcb2e5e48f","Type":"ContainerStarted","Data":"448fc0f7f81217363e8187a42cadcc3c795455cd3496b1f04f53e5e39c9dabf1"} Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.514163 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" 
Jan 26 23:24:11 crc kubenswrapper[4995]: I0126 23:24:11.538675 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" podStartSLOduration=2.88497353 podStartE2EDuration="39.538656723s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.267246266 +0000 UTC m=+918.431953731" lastFinishedPulling="2026-01-26 23:24:10.920929459 +0000 UTC m=+955.085636924" observedRunningTime="2026-01-26 23:24:11.534989312 +0000 UTC m=+955.699696817" watchObservedRunningTime="2026-01-26 23:24:11.538656723 +0000 UTC m=+955.703364188" Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.539725 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" event={"ID":"4e9b965f-6060-43e7-aa1c-b73472075bae","Type":"ContainerStarted","Data":"025a8a00710599d955c5d8a5e3ef6c315173a0c80d3c0a044f612e6f1b93a08f"} Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.539938 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:24:12 crc kubenswrapper[4995]: I0126 23:24:12.569269 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" podStartSLOduration=3.127394423 podStartE2EDuration="40.569247696s" podCreationTimestamp="2026-01-26 23:23:32 +0000 UTC" firstStartedPulling="2026-01-26 23:23:34.54496506 +0000 UTC m=+918.709672535" lastFinishedPulling="2026-01-26 23:24:11.986818323 +0000 UTC m=+956.151525808" observedRunningTime="2026-01-26 23:24:12.561841941 +0000 UTC m=+956.726549436" watchObservedRunningTime="2026-01-26 23:24:12.569247696 +0000 UTC m=+956.733955171" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.023220 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/cinder-operator-controller-manager-655bf9cfbb-pzzq9" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.041632 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-77554cdc5c-kgv2f" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.072951 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-67dd55ff59-gdvdp" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.133049 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-954b94f75-7q5kj" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.218597 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-r7mgm" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.304147 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6987f66698-x2fg8" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.456750 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-768b776ffb-6gtf9" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.499457 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-849fcfbb6b-w2gfg" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.510979 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.538687 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/neutron-operator-controller-manager-7ffd8d76d4-p47jp" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.581625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-756f86fc74-7s666" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.602083 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-z899w" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.634224 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-5zhml" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.666816 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-b4kzb" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.772536 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-799bc87c89-bmdgt" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.837601 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-kjmpf" Jan 26 23:24:13 crc kubenswrapper[4995]: I0126 23:24:13.863160 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:15 crc kubenswrapper[4995]: I0126 23:24:15.124927 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b85484lj5" Jan 26 23:24:15 crc kubenswrapper[4995]: I0126 23:24:15.699490 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-manager-58b6ccbf98-85h8w" Jan 26 23:24:23 crc kubenswrapper[4995]: I0126 23:24:23.478202 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55f684fd56-gzjxj" Jan 26 23:24:23 crc kubenswrapper[4995]: I0126 23:24:23.560520 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7f54b7d6d4-cf7gh" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.009308 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.010435 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager" containerID="cri-o://385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" gracePeriod=10 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.065378 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.065765 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator" containerID="cri-o://6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114" gracePeriod=10 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.459989 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.599555 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") pod \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\" (UID: \"e28ba494-e3ae-4294-8018-e9b8d7a1f96a\") " Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.606323 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp" (OuterVolumeSpecName: "kube-api-access-fjxhp") pod "e28ba494-e3ae-4294-8018-e9b8d7a1f96a" (UID: "e28ba494-e3ae-4294-8018-e9b8d7a1f96a"). InnerVolumeSpecName "kube-api-access-fjxhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691597 4995 generic.go:334] "Generic (PLEG): container finished" podID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" exitCode=0 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691711 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerDied","Data":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" event={"ID":"e28ba494-e3ae-4294-8018-e9b8d7a1f96a","Type":"ContainerDied","Data":"8fc64f2241602e5430031603131579ddfcb71635f56628aa31eac33e8190f64f"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691752 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.691782 4995 scope.go:117] "RemoveContainer" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.694037 4995 generic.go:334] "Generic (PLEG): container finished" podID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerID="6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114" exitCode=0 Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.694082 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerDied","Data":"6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114"} Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.701143 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjxhp\" (UniqueName: \"kubernetes.io/projected/e28ba494-e3ae-4294-8018-e9b8d7a1f96a-kube-api-access-fjxhp\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.728443 4995 scope.go:117] "RemoveContainer" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: E0126 23:24:28.733031 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": container with ID starting with 385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38 not found: ID does not exist" containerID="385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.733351 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38"} err="failed to get container status \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": rpc error: code = NotFound desc = could not find container \"385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38\": container with ID starting with 385236065a9b25739d48681be3be09b567acf44315b68ebfe6141a62502d4c38 not found: ID does not exist" Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.736811 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.747166 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-7b8f755c7-tlv6g"] Jan 26 23:24:28 crc kubenswrapper[4995]: I0126 23:24:28.959959 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.004583 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") pod \"892f33f6-3409-407d-b85b-922b8bdbfa16\" (UID: \"892f33f6-3409-407d-b85b-922b8bdbfa16\") " Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.010293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22" (OuterVolumeSpecName: "kube-api-access-f4k22") pod "892f33f6-3409-407d-b85b-922b8bdbfa16" (UID: "892f33f6-3409-407d-b85b-922b8bdbfa16"). InnerVolumeSpecName "kube-api-access-f4k22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.106464 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4k22\" (UniqueName: \"kubernetes.io/projected/892f33f6-3409-407d-b85b-922b8bdbfa16-kube-api-access-f4k22\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701355 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" event={"ID":"892f33f6-3409-407d-b85b-922b8bdbfa16","Type":"ContainerDied","Data":"c39384df979e6337b8f9a32ef86a0cb2526573842d84866ed04f1ff9dcd951b0"} Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701661 4995 scope.go:117] "RemoveContainer" containerID="6a5755d8b4f8e8fbc12a9584a063252b6234f0b1c979feb6127b8e6060aa5114" Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.701388 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp" Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.971318 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:24:29 crc kubenswrapper[4995]: I0126 23:24:29.982275 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-f8d7d87cb-d4ktp"] Jan 26 23:24:30 crc kubenswrapper[4995]: I0126 23:24:30.525037 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" path="/var/lib/kubelet/pods/892f33f6-3409-407d-b85b-922b8bdbfa16/volumes" Jan 26 23:24:30 crc kubenswrapper[4995]: I0126 23:24:30.525515 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" path="/var/lib/kubelet/pods/e28ba494-e3ae-4294-8018-e9b8d7a1f96a/volumes" Jan 26 23:24:32 crc 
kubenswrapper[4995]: I0126 23:24:32.118651 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119459 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119483 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator" Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119515 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119529 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server" Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119544 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119558 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager" Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119588 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-content" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119601 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-content" Jan 26 23:24:32 crc kubenswrapper[4995]: E0126 23:24:32.119628 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-utilities" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119641 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="extract-utilities" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119923 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28ba494-e3ae-4294-8018-e9b8d7a1f96a" containerName="manager" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119947 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="892f33f6-3409-407d-b85b-922b8bdbfa16" containerName="operator" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.119978 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb8f3318-4432-4877-9e0c-1ae39d3a849e" containerName="registry-server" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.120807 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.125306 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-index-dockercfg-rl7b8" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.136753 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.246095 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.347780 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod 
\"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.401899 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"watcher-operator-index-6vpbz\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:32 crc kubenswrapper[4995]: I0126 23:24:32.445017 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:33 crc kubenswrapper[4995]: I0126 23:24:33.187659 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:33 crc kubenswrapper[4995]: W0126 23:24:33.203295 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7c97a9e_e3f1_441b_b4f8_6e15bfb926e0.slice/crio-9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65 WatchSource:0}: Error finding container 9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65: Status 404 returned error can't find the container with id 9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65 Jan 26 23:24:33 crc kubenswrapper[4995]: I0126 23:24:33.729838 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerStarted","Data":"9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65"} Jan 26 23:24:34 crc kubenswrapper[4995]: I0126 23:24:34.741727 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" 
event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerStarted","Data":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"} Jan 26 23:24:34 crc kubenswrapper[4995]: I0126 23:24:34.768687 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-6vpbz" podStartSLOduration=1.734916026 podStartE2EDuration="2.768652805s" podCreationTimestamp="2026-01-26 23:24:32 +0000 UTC" firstStartedPulling="2026-01-26 23:24:33.204873422 +0000 UTC m=+977.369580887" lastFinishedPulling="2026-01-26 23:24:34.238610211 +0000 UTC m=+978.403317666" observedRunningTime="2026-01-26 23:24:34.759434465 +0000 UTC m=+978.924141980" watchObservedRunningTime="2026-01-26 23:24:34.768652805 +0000 UTC m=+978.933360310" Jan 26 23:24:35 crc kubenswrapper[4995]: I0126 23:24:35.707682 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.312119 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"] Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.313460 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.323765 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"] Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.423660 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.524555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.569760 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mck\" (UniqueName: \"kubernetes.io/projected/fea9da97-72c6-4b3a-a479-1566d93b3a22-kube-api-access-q6mck\") pod \"watcher-operator-index-k8w76\" (UID: \"fea9da97-72c6-4b3a-a479-1566d93b3a22\") " pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.680887 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.764239 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-index-6vpbz" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server" containerID="cri-o://3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" gracePeriod=2 Jan 26 23:24:36 crc kubenswrapper[4995]: I0126 23:24:36.974350 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-index-k8w76"] Jan 26 23:24:36 crc kubenswrapper[4995]: W0126 23:24:36.975017 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfea9da97_72c6_4b3a_a479_1566d93b3a22.slice/crio-d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e WatchSource:0}: Error finding container d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e: Status 404 returned error can't find the container with id d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.414209 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.576715 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") pod \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\" (UID: \"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0\") " Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.583601 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc" (OuterVolumeSpecName: "kube-api-access-kxplc") pod "a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" (UID: "a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0"). InnerVolumeSpecName "kube-api-access-kxplc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.680497 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxplc\" (UniqueName: \"kubernetes.io/projected/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0-kube-api-access-kxplc\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.776568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-k8w76" event={"ID":"fea9da97-72c6-4b3a-a479-1566d93b3a22","Type":"ContainerStarted","Data":"2597e92cbdadb7e020ded468fe9a531871ff88ba827a9710cfa848692d8bee48"} Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.776622 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-k8w76" event={"ID":"fea9da97-72c6-4b3a-a479-1566d93b3a22","Type":"ContainerStarted","Data":"d8085d3dca346c2e1f1db8eca659feb2b704fe9e8b729d16ce5c25d230b25f4e"} Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.782978 4995 generic.go:334] "Generic (PLEG): container finished" 
podID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" exitCode=0 Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783040 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-index-6vpbz" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783045 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerDied","Data":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"} Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783138 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-index-6vpbz" event={"ID":"a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0","Type":"ContainerDied","Data":"9a1a02e89a108100f1c228222c2578d01a13c32f4e252341fd792022c7047b65"} Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.783172 4995 scope.go:117] "RemoveContainer" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.812714 4995 scope.go:117] "RemoveContainer" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" Jan 26 23:24:37 crc kubenswrapper[4995]: E0126 23:24:37.813748 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": container with ID starting with 3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06 not found: ID does not exist" containerID="3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.813838 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06"} err="failed to get container status \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": rpc error: code = NotFound desc = could not find container \"3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06\": container with ID starting with 3c2fb2577be26aa356b78e1a4d421edbc759a201d87cdb7d72bf2ecaf619bd06 not found: ID does not exist" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.815626 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-index-k8w76" podStartSLOduration=1.316272131 podStartE2EDuration="1.815616228s" podCreationTimestamp="2026-01-26 23:24:36 +0000 UTC" firstStartedPulling="2026-01-26 23:24:36.979037821 +0000 UTC m=+981.143745286" lastFinishedPulling="2026-01-26 23:24:37.478381918 +0000 UTC m=+981.643089383" observedRunningTime="2026-01-26 23:24:37.799989258 +0000 UTC m=+981.964696743" watchObservedRunningTime="2026-01-26 23:24:37.815616228 +0000 UTC m=+981.980323693" Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.819809 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:37 crc kubenswrapper[4995]: I0126 23:24:37.824930 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-index-6vpbz"] Jan 26 23:24:38 crc kubenswrapper[4995]: I0126 23:24:38.545036 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" path="/var/lib/kubelet/pods/a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0/volumes" Jan 26 23:24:40 crc kubenswrapper[4995]: I0126 23:24:40.893231 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:24:40 crc kubenswrapper[4995]: I0126 23:24:40.893681 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.682140 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.683353 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.721557 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:46 crc kubenswrapper[4995]: I0126 23:24:46.881672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-index-k8w76" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.758399 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"] Jan 26 23:24:49 crc kubenswrapper[4995]: E0126 23:24:49.758968 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.758982 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" containerName="registry-server" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.759183 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c97a9e-e3f1-441b-b4f8-6e15bfb926e0" 
containerName="registry-server" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.760134 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.764520 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-jtm6l" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.777681 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"] Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865259 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.865488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " 
pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966543 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966703 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.966745 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.967636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.967802 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:49 crc kubenswrapper[4995]: I0126 23:24:49.992951 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.130157 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.418147 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn"] Jan 26 23:24:50 crc kubenswrapper[4995]: W0126 23:24:50.762334 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c23b438_d384_46e6_8c88_6703c70fccea.slice/crio-4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830 WatchSource:0}: Error finding container 4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830: Status 404 returned error can't find the container with id 4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830 Jan 26 23:24:50 crc kubenswrapper[4995]: I0126 23:24:50.888334 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerStarted","Data":"4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830"} Jan 26 23:24:51 crc kubenswrapper[4995]: I0126 23:24:51.898053 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="8f24a68134ac2281dde9b35e9f503389a667afb82ea81b48563498762a961994" exitCode=0 Jan 26 23:24:51 crc kubenswrapper[4995]: I0126 23:24:51.898179 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"8f24a68134ac2281dde9b35e9f503389a667afb82ea81b48563498762a961994"} Jan 26 23:24:52 crc kubenswrapper[4995]: I0126 23:24:52.911324 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="defe6506de0e57309828b36b87e983c4ec156df4d4956d49d988713262c93c71" exitCode=0 Jan 26 23:24:52 crc kubenswrapper[4995]: I0126 23:24:52.911432 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"defe6506de0e57309828b36b87e983c4ec156df4d4956d49d988713262c93c71"} Jan 26 23:24:53 crc kubenswrapper[4995]: I0126 23:24:53.927520 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c23b438-d384-46e6-8c88-6703c70fccea" containerID="47b8ed82eaf50103c1712153fb983eadad78e1f7233544e73be686e857f0dfaa" exitCode=0 Jan 26 23:24:53 crc kubenswrapper[4995]: I0126 23:24:53.927636 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" 
event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"47b8ed82eaf50103c1712153fb983eadad78e1f7233544e73be686e857f0dfaa"} Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.261146 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.342825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.342928 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.343006 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") pod \"5c23b438-d384-46e6-8c88-6703c70fccea\" (UID: \"5c23b438-d384-46e6-8c88-6703c70fccea\") " Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.344802 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle" (OuterVolumeSpecName: "bundle") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.360147 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k" (OuterVolumeSpecName: "kube-api-access-xjb9k") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "kube-api-access-xjb9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.364090 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util" (OuterVolumeSpecName: "util") pod "5c23b438-d384-46e6-8c88-6703c70fccea" (UID: "5c23b438-d384-46e6-8c88-6703c70fccea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445613 4995 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445673 4995 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c23b438-d384-46e6-8c88-6703c70fccea-util\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.445694 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjb9k\" (UniqueName: \"kubernetes.io/projected/5c23b438-d384-46e6-8c88-6703c70fccea-kube-api-access-xjb9k\") on node \"crc\" DevicePath \"\"" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949189 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" 
event={"ID":"5c23b438-d384-46e6-8c88-6703c70fccea","Type":"ContainerDied","Data":"4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830"} Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949247 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cd6bed35c8568e1a659223b1d9f5a083785483cdb0211678b799f6465ac6830" Jan 26 23:24:55 crc kubenswrapper[4995]: I0126 23:24:55.949304 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.248612 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249119 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="pull" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249131 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="pull" Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249148 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249154 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract" Jan 26 23:25:01 crc kubenswrapper[4995]: E0126 23:25:01.249166 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="util" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249171 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="util" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249297 4995 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5c23b438-d384-46e6-8c88-6703c70fccea" containerName="extract" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.249715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.251273 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-service-cert" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.251437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-fw4c6" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.264645 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340092 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.340231 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.441993 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.451138 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod 
\"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.458772 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.465798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"watcher-operator-controller-manager-fc744d67-kb4r5\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.568483 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.775921 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.776997 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.784016 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.840926 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855021 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855217 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.855348 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956703 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.956825 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.961826 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-webhook-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.963758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/001f4541-5731-4423-9cf7-f2c339b975b1-apiservice-cert\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: 
\"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:01 crc kubenswrapper[4995]: I0126 23:25:01.987371 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9cws\" (UniqueName: \"kubernetes.io/projected/001f4541-5731-4423-9cf7-f2c339b975b1-kube-api-access-j9cws\") pod \"watcher-operator-controller-manager-69796cd4f7-2jmll\" (UID: \"001f4541-5731-4423-9cf7-f2c339b975b1\") " pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.005233 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerStarted","Data":"9fb2cf74c6bd172d22c4db04ecf00ee3e66d50a61ad6ab006b596012477e9423"} Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.106974 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:02 crc kubenswrapper[4995]: I0126 23:25:02.616615 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll"] Jan 26 23:25:02 crc kubenswrapper[4995]: W0126 23:25:02.636151 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod001f4541_5731_4423_9cf7_f2c339b975b1.slice/crio-a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b WatchSource:0}: Error finding container a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b: Status 404 returned error can't find the container with id a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.013133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerStarted","Data":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.013598 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" event={"ID":"001f4541-5731-4423-9cf7-f2c339b975b1","Type":"ContainerStarted","Data":"f261cde3f1f6fc54e192c076848912f28f7f301be79adb6fff5a64364694abb7"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016528 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" 
event={"ID":"001f4541-5731-4423-9cf7-f2c339b975b1","Type":"ContainerStarted","Data":"a38961f4942880e34f943cd8803b14aeb57de984891af0a1d39f5afaea47785b"} Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.016660 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.042038 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" podStartSLOduration=2.042018352 podStartE2EDuration="2.042018352s" podCreationTimestamp="2026-01-26 23:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:03.037210662 +0000 UTC m=+1007.201918127" watchObservedRunningTime="2026-01-26 23:25:03.042018352 +0000 UTC m=+1007.206725807" Jan 26 23:25:03 crc kubenswrapper[4995]: I0126 23:25:03.056952 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" podStartSLOduration=2.056931774 podStartE2EDuration="2.056931774s" podCreationTimestamp="2026-01-26 23:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:03.051904359 +0000 UTC m=+1007.216611824" watchObservedRunningTime="2026-01-26 23:25:03.056931774 +0000 UTC m=+1007.221639239" Jan 26 23:25:10 crc kubenswrapper[4995]: I0126 23:25:10.894196 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:25:10 crc kubenswrapper[4995]: I0126 23:25:10.894943 4995 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:25:11 crc kubenswrapper[4995]: I0126 23:25:11.574732 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.111738 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-69796cd4f7-2jmll" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.168044 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.174421 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" containerID="cri-o://3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" gracePeriod=10 Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.585290 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760375 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.760432 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") pod \"5c59a309-5169-4591-9059-414f361ef107\" (UID: \"5c59a309-5169-4591-9059-414f361ef107\") " Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.765134 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.765376 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr" (OuterVolumeSpecName: "kube-api-access-bwdgr") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). 
InnerVolumeSpecName "kube-api-access-bwdgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.766759 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "5c59a309-5169-4591-9059-414f361ef107" (UID: "5c59a309-5169-4591-9059-414f361ef107"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.862954 4995 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.863010 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwdgr\" (UniqueName: \"kubernetes.io/projected/5c59a309-5169-4591-9059-414f361ef107-kube-api-access-bwdgr\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:12 crc kubenswrapper[4995]: I0126 23:25:12.863026 4995 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5c59a309-5169-4591-9059-414f361ef107-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.097991 4995 generic.go:334] "Generic (PLEG): container finished" podID="5c59a309-5169-4591-9059-414f361ef107" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" exitCode=0 Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098047 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerDied","Data":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 
23:25:13.098134 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" event={"ID":"5c59a309-5169-4591-9059-414f361ef107","Type":"ContainerDied","Data":"9fb2cf74c6bd172d22c4db04ecf00ee3e66d50a61ad6ab006b596012477e9423"} Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098161 4995 scope.go:117] "RemoveContainer" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.098068 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.132149 4995 scope.go:117] "RemoveContainer" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: E0126 23:25:13.143509 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": container with ID starting with 3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa not found: ID does not exist" containerID="3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.143623 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa"} err="failed to get container status \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": rpc error: code = NotFound desc = could not find container \"3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa\": container with ID starting with 3060b0c281b905b3b86766c9d1ad0344d6de71899de5a66ba60ffd9ef0e917fa not found: ID does not exist" Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.143697 
4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:13 crc kubenswrapper[4995]: I0126 23:25:13.152265 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-fc744d67-kb4r5"] Jan 26 23:25:14 crc kubenswrapper[4995]: I0126 23:25:14.532590 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c59a309-5169-4591-9059-414f361ef107" path="/var/lib/kubelet/pods/5c59a309-5169-4591-9059-414f361ef107/volumes" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.642944 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 23:25:24 crc kubenswrapper[4995]: E0126 23:25:24.643788 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.643805 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.643967 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c59a309-5169-4591-9059-414f361ef107" containerName="manager" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.644679 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.646902 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-erlang-cookie" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.647502 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-notifications-svc" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-default-user" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650977 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-config-data" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650991 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.650984 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openshift-service-ca.crt" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.651746 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"kube-root-ca.crt" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.651869 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-notifications-plugins-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.652039 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-notifications-server-dockercfg-pqnf2" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.668487 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 23:25:24 
crc kubenswrapper[4995]: I0126 23:25:24.735571 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736027 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736526 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736729 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.736970 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737199 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737597 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.737846 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.738066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839229 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839295 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839356 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc 
kubenswrapper[4995]: I0126 23:25:24.839402 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839450 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.839917 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.840015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.840562 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " 
pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841410 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841568 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841746 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841808 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod 
\"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.841847 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.842181 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.844274 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/54ccebac-5075-4c00-a1e9-ebb66b43876e-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.846660 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/54ccebac-5075-4c00-a1e9-ebb66b43876e-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.853227 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.853290 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.854333 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/54ccebac-5075-4c00-a1e9-ebb66b43876e-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.865315 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.865354 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da5cb5359468e2c97ef0be615b3e6aea7eec4cdd8c24ba9a8c01b3413d40eb52/globalmount\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.877006 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk2rx\" (UniqueName: \"kubernetes.io/projected/54ccebac-5075-4c00-a1e9-ebb66b43876e-kube-api-access-hk2rx\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.892461 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4e287017-b92a-4413-b433-c1224ce365df\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4e287017-b92a-4413-b433-c1224ce365df\") pod \"rabbitmq-notifications-server-0\" (UID: \"54ccebac-5075-4c00-a1e9-ebb66b43876e\") " pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.918697 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.920478 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.922339 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-default-user" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924491 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-config-data" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924600 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-server-dockercfg-mwzsq" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924791 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"rabbitmq-erlang-cookie" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924872 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-plugins-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.924992 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-rabbitmq-svc" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.926292 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"rabbitmq-server-conf" Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.940702 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 26 23:25:24 crc kubenswrapper[4995]: I0126 23:25:24.969727 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044500 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044530 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044573 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044612 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod 
\"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044636 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044651 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044690 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044765 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.044903 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146091 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146173 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146200 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146228 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146244 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146262 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146287 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146317 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146361 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146394 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.146434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.147154 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.150283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.150612 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-plugins\") pod 
\"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.152375 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-config-data\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.153404 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b909799-2071-4d68-ab55-d29f6e224bf2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.168343 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.168675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b909799-2071-4d68-ab55-d29f6e224bf2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.175025 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.175083 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b1a8ae1cced15c0fefbf855ad861c0e73323158eeb7a4fd7929b5650c51db8d/globalmount\"" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.182417 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.183864 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b909799-2071-4d68-ab55-d29f6e224bf2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.187649 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q4ws\" (UniqueName: \"kubernetes.io/projected/4b909799-2071-4d68-ab55-d29f6e224bf2-kube-api-access-6q4ws\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.225852 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a9adcbf8-3659-45c3-bb80-9dad0f4aad40\") pod \"rabbitmq-server-0\" (UID: \"4b909799-2071-4d68-ab55-d29f6e224bf2\") " pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.247671 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.458502 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-notifications-server-0"] Jan 26 23:25:25 crc kubenswrapper[4995]: W0126 23:25:25.477822 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54ccebac_5075_4c00_a1e9_ebb66b43876e.slice/crio-775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b WatchSource:0}: Error finding container 775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b: Status 404 returned error can't find the container with id 775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b Jan 26 23:25:25 crc kubenswrapper[4995]: I0126 23:25:25.757761 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/rabbitmq-server-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.080412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.081584 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.088716 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-scripts" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.089142 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config-data" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.091672 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-galera-openstack-svc" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.091799 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"galera-openstack-dockercfg-ghvl2" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.099776 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"combined-ca-bundle" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.103269 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166616 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166662 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: 
I0126 23:25:26.166678 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166717 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166745 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166780 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " 
pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.166812 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.215518 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"dbb80368d8ffcd44825bc5cd37008cb4e27e10851c3195eed5ce2f91495315e9"} Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.216636 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"775390884a8358a5084ceecb38099aabefbcba114a3c2aa21ee0469c185d6b0b"} Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268197 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268279 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268305 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268319 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268341 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268357 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.268425 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.269449 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.270947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.271431 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.272724 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.274469 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.274499 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e0ff4dc62304ff368840298572b97b47aaecc3ae5fd762b3367d1ed0e52e303f/globalmount\"" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.276989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.289636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflxn\" (UniqueName: \"kubernetes.io/projected/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-kube-api-access-hflxn\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.292066 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5da7bc3d-c0c7-4935-ba58-c64da8c943b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.328601 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d6c2b23b-cfc7-4feb-8d45-5abba0368ca3\") pod \"openstack-galera-0\" (UID: \"5da7bc3d-c0c7-4935-ba58-c64da8c943b0\") " pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.382374 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.385249 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.397705 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.402432 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405668 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405744 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-zzlxj" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.405872 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470828 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470920 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.470985 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.471001 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.471052 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572940 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: 
\"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.572989 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573047 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.573751 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.575310 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " 
pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.588335 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.588717 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.593570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"memcached-0\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.722424 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.753420 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.754334 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.764622 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"telemetry-ceilometer-dockercfg-8j7tn" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.774914 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.877065 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.968184 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstack-galera-0"] Jan 26 23:25:26 crc kubenswrapper[4995]: I0126 23:25:26.978724 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: W0126 23:25:26.999042 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5da7bc3d_c0c7_4935_ba58_c64da8c943b0.slice/crio-deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246 WatchSource:0}: Error finding container deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246: Status 404 returned error can't find the container with id deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246 Jan 26 23:25:27 crc kubenswrapper[4995]: 
I0126 23:25:27.031132 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"kube-state-metrics-0\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.094497 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.231231 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"deaebca0f127cebf5f87d9e5872c996fe150781d0cd779aba695fe74e06d6246"} Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.357941 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.615929 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.618989 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.624530 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.624934 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-generated" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-web-config" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632541 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-tls-assets-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632680 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"alertmanager-metric-storage-cluster-tls-config" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.632854 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-alertmanager-dockercfg-x7mks" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.655700 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699853 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699928 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.699952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700094 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.700195 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc 
kubenswrapper[4995]: I0126 23:25:27.700307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802024 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802110 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802148 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " 
pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802185 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802243 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802269 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.802742 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.817383 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-web-config\") pod \"alertmanager-metric-storage-0\" (UID: 
\"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.817867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.819854 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.825757 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5083beb6-ae53-44e5-a82c-872943996b7b-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.828297 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/5083beb6-ae53-44e5-a82c-872943996b7b-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.831453 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.836627 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2wbp9\" (UniqueName: \"kubernetes.io/projected/5083beb6-ae53-44e5-a82c-872943996b7b-kube-api-access-2wbp9\") pod \"alertmanager-metric-storage-0\" (UID: \"5083beb6-ae53-44e5-a82c-872943996b7b\") " pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.842993 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.848633 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-kglr7" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.849081 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.882303 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.905201 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc kubenswrapper[4995]: I0126 23:25:27.905329 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:27 crc 
kubenswrapper[4995]: I0126 23:25:27.947177 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.006777 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.006890 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: E0126 23:25:28.007332 4995 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 26 23:25:28 crc kubenswrapper[4995]: E0126 23:25:28.007379 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert podName:403406f0-ed75-4c4d-878b-a21885f105d2 nodeName:}" failed. No retries permitted until 2026-01-26 23:25:28.507364579 +0000 UTC m=+1032.672072044 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert") pod "observability-ui-dashboards-66cbf594b5-k62mg" (UID: "403406f0-ed75-4c4d-878b-a21885f105d2") : secret "observability-ui-dashboards" not found Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.032372 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.034627 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.039907 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddgj9\" (UniqueName: \"kubernetes.io/projected/403406f0-ed75-4c4d-878b-a21885f105d2-kube-api-access-ddgj9\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045492 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045646 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045739 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-wlv4m" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045805 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045923 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.045953 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.046063 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.046094 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.055510 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.125913 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142280 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142428 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod 
\"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142511 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142572 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142650 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142692 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 
23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142761 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142829 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.142859 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.201532 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.203294 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.231262 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243816 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243912 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.243976 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244026 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244056 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.244175 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.246720 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.249445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerStarted","Data":"af898602486bbd8c6c6157c2639e73c909ad485c5d6cbfe7b28ea19f3b85c23d"} Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.250370 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.250816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.251084 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.251157 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07692cb0263c36332c1ef11dc7b21734b21031d82ebacc820f394211727ef21a/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.252044 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.252474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.264173 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.267055 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerStarted","Data":"1fe63fca4fd6cb5199a750cf9e863e7fdd11939b8e0ee09e81633ccef9bdd3c7"} Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.272501 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.276832 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.277960 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347163 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc 
kubenswrapper[4995]: I0126 23:25:28.347468 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347496 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347578 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347602 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347680 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") 
" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.347706 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.388095 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.433062 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.452922 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.452966 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453039 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453058 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " 
pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453127 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453146 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.453919 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-service-ca\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.454853 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-trusted-ca-bundle\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.458211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc 
kubenswrapper[4995]: I0126 23:25:28.459349 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-oauth-serving-cert\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.460282 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7719e2c4-1e5e-4b93-b161-9126b700549f-console-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.469949 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7719e2c4-1e5e-4b93-b161-9126b700549f-console-oauth-config\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.493795 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw7hw\" (UniqueName: \"kubernetes.io/projected/7719e2c4-1e5e-4b93-b161-9126b700549f-kube-api-access-qw7hw\") pod \"console-fdbdb9c5b-g5zw8\" (UID: \"7719e2c4-1e5e-4b93-b161-9126b700549f\") " pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.521562 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.558536 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.563675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/403406f0-ed75-4c4d-878b-a21885f105d2-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-k62mg\" (UID: \"403406f0-ed75-4c4d-878b-a21885f105d2\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.691164 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/alertmanager-metric-storage-0"] Jan 26 23:25:28 crc kubenswrapper[4995]: I0126 23:25:28.800343 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.290133 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"0e6bdda80d541431db425ed666d561751c57a4ce5bae6217b0f3ab0ab6e8e764"} Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.440153 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:25:29 crc kubenswrapper[4995]: I0126 23:25:29.634004 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-fdbdb9c5b-g5zw8"] Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.131216 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg"] Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.306610 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdbdb9c5b-g5zw8" event={"ID":"7719e2c4-1e5e-4b93-b161-9126b700549f","Type":"ContainerStarted","Data":"5ab3a506eb87ef5b1df90e8fcfcb421114b4618b74afb68aaf45363a6d9c0689"} Jan 26 23:25:30 crc kubenswrapper[4995]: I0126 23:25:30.308465 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"09e644cca6d7bb2e34c3abbe27a572044fa392307e8fabe836e1c584f958c8a8"} Jan 26 23:25:30 crc kubenswrapper[4995]: W0126 23:25:30.314180 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod403406f0_ed75_4c4d_878b_a21885f105d2.slice/crio-d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155 WatchSource:0}: Error finding container 
d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155: Status 404 returned error can't find the container with id d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155 Jan 26 23:25:31 crc kubenswrapper[4995]: I0126 23:25:31.316928 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" event={"ID":"403406f0-ed75-4c4d-878b-a21885f105d2","Type":"ContainerStarted","Data":"d86408ca15a7ce6c32c41e00668136470de4b70a4745c082f0296c1fd9167155"} Jan 26 23:25:32 crc kubenswrapper[4995]: I0126 23:25:32.338640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-fdbdb9c5b-g5zw8" event={"ID":"7719e2c4-1e5e-4b93-b161-9126b700549f","Type":"ContainerStarted","Data":"979f3ec3e484c57d06aebd49b07a0577cf050e7bc007763d171b1dd8799396e6"} Jan 26 23:25:32 crc kubenswrapper[4995]: I0126 23:25:32.382344 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-fdbdb9c5b-g5zw8" podStartSLOduration=4.382327718 podStartE2EDuration="4.382327718s" podCreationTimestamp="2026-01-26 23:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:25:32.380358649 +0000 UTC m=+1036.545066114" watchObservedRunningTime="2026-01-26 23:25:32.382327718 +0000 UTC m=+1036.547035183" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.526967 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.527587 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:38 crc kubenswrapper[4995]: I0126 23:25:38.527913 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:39 crc 
kubenswrapper[4995]: I0126 23:25:39.391839 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-fdbdb9c5b-g5zw8" Jan 26 23:25:39 crc kubenswrapper[4995]: I0126 23:25:39.485048 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893395 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893460 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.893512 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.894235 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:25:40 crc kubenswrapper[4995]: I0126 23:25:40.894585 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" 
containerName="machine-config-daemon" containerID="cri-o://c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" gracePeriod=600 Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.846382 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.846648 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hflxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifec
ycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_watcher-kuttl-default(5da7bc3d-c0c7-4935-ba58-c64da8c943b0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:41 crc kubenswrapper[4995]: E0126 23:25:41.847923 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433628 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" exitCode=0 Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576"} Jan 26 23:25:42 crc kubenswrapper[4995]: I0126 23:25:42.433761 4995 scope.go:117] "RemoveContainer" containerID="b4093ba3ef240f4a22dc52fad4871f90a715052046ec4b9cbcd3de91d7cc9c46" Jan 26 23:25:42 crc kubenswrapper[4995]: E0126 23:25:42.436553 4995 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="watcher-kuttl-default/openstack-galera-0" podUID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.270037 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.271024 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q4ws,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_watcher-kuttl-default(4b909799-2071-4d68-ab55-d29f6e224bf2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:43 crc 
kubenswrapper[4995]: E0126 23:25:43.272522 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="4b909799-2071-4d68-ab55-d29f6e224bf2" Jan 26 23:25:43 crc kubenswrapper[4995]: E0126 23:25:43.444922 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-server-0" podUID="4b909799-2071-4d68-ab55-d29f6e224bf2" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.119457 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.119722 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n564h5f8h67dh5dbh8chc8h54bhc4h5b5hdh57dh86h678h66ch55h64fh5d9h655h5b5h5dfhdh5b5hc8h556h589h5ffh8dh579hfbh96h697h96q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qjbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_watcher-kuttl-default(37ec7b7e-84e8-4a58-b676-c06ed9a0809e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.120942 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.142553 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.142977 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk2rx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPro
be:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_watcher-kuttl-default(54ccebac-5075-4c00-a1e9-ebb66b43876e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.144191 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="54ccebac-5075-4c00-a1e9-ebb66b43876e" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.452609 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" event={"ID":"403406f0-ed75-4c4d-878b-a21885f105d2","Type":"ContainerStarted","Data":"c3beeaa724ee3bcb7c246b00a87de1cf72babff4456987ffa30e10064d5c865f"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.454865 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.456616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerStarted","Data":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"} Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.456941 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.458521 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" Jan 26 23:25:44 crc kubenswrapper[4995]: E0126 23:25:44.458854 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podUID="54ccebac-5075-4c00-a1e9-ebb66b43876e" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.469962 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-k62mg" podStartSLOduration=3.719457473 podStartE2EDuration="17.469945048s" podCreationTimestamp="2026-01-26 23:25:27 +0000 UTC" firstStartedPulling="2026-01-26 23:25:30.318387857 +0000 UTC m=+1034.483095322" lastFinishedPulling="2026-01-26 23:25:44.068875412 +0000 UTC m=+1048.233582897" observedRunningTime="2026-01-26 23:25:44.468787129 +0000 UTC m=+1048.633494594" watchObservedRunningTime="2026-01-26 23:25:44.469945048 +0000 UTC m=+1048.634652513" Jan 26 23:25:44 crc kubenswrapper[4995]: I0126 23:25:44.558815 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.143186557 podStartE2EDuration="18.558794101s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.646841037 +0000 UTC m=+1031.811548502" lastFinishedPulling="2026-01-26 23:25:44.062448541 +0000 UTC m=+1048.227156046" observedRunningTime="2026-01-26 23:25:44.553760435 +0000 UTC m=+1048.718467910" watchObservedRunningTime="2026-01-26 23:25:44.558794101 +0000 UTC m=+1048.723501566" Jan 26 23:25:47 crc kubenswrapper[4995]: I0126 23:25:47.480264 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac"} Jan 26 23:25:48 crc kubenswrapper[4995]: I0126 23:25:48.492209 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714"} Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.556299 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b"} Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.563290 4995 generic.go:334] "Generic (PLEG): container finished" podID="5083beb6-ae53-44e5-a82c-872943996b7b" containerID="292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac" exitCode=0 Jan 26 23:25:55 crc kubenswrapper[4995]: I0126 23:25:55.563347 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" 
event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerDied","Data":"292a111e21591204ddcff9f67d10ef28cf63c9fa8de4ac90bce69c4c744ab1ac"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.575415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerStarted","Data":"3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.576629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/memcached-0" Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.578082 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.579986 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714" exitCode=0 Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.580007 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714"} Jan 26 23:25:56 crc kubenswrapper[4995]: I0126 23:25:56.612854 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.057420108 podStartE2EDuration="30.612838032s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.375257247 +0000 UTC m=+1031.539964712" lastFinishedPulling="2026-01-26 23:25:55.930675161 +0000 UTC m=+1060.095382636" 
observedRunningTime="2026-01-26 23:25:56.610916404 +0000 UTC m=+1060.775623879" watchObservedRunningTime="2026-01-26 23:25:56.612838032 +0000 UTC m=+1060.777545497" Jan 26 23:25:57 crc kubenswrapper[4995]: I0126 23:25:57.099629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:25:58 crc kubenswrapper[4995]: I0126 23:25:58.617623 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"3ba9dbe6094498b682a56dc9388e05547145b296dc917a1fb2de2a1e7531d322"} Jan 26 23:25:58 crc kubenswrapper[4995]: I0126 23:25:58.619804 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3"} Jan 26 23:25:59 crc kubenswrapper[4995]: I0126 23:25:59.628717 4995 generic.go:334] "Generic (PLEG): container finished" podID="5da7bc3d-c0c7-4935-ba58-c64da8c943b0" containerID="ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b" exitCode=0 Jan 26 23:25:59 crc kubenswrapper[4995]: I0126 23:25:59.628908 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerDied","Data":"ebbc3151e1aabd6a0948c68b058ab7ffcc016f7c745ce9b3fb26d1dd0241057b"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.642025 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstack-galera-0" event={"ID":"5da7bc3d-c0c7-4935-ba58-c64da8c943b0","Type":"ContainerStarted","Data":"919fe0961ef22744aeca9b6012860efea116b94a22f6c927f955f56a31555ab2"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.646377 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/alertmanager-metric-storage-0" event={"ID":"5083beb6-ae53-44e5-a82c-872943996b7b","Type":"ContainerStarted","Data":"fa0981348c0a1c624f5558a6dd68d2e8df54a7f49066cbb5483262294a260969"} Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.646898 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.667300 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstack-galera-0" podStartSLOduration=7.682104217 podStartE2EDuration="35.66728024s" podCreationTimestamp="2026-01-26 23:25:25 +0000 UTC" firstStartedPulling="2026-01-26 23:25:27.005664909 +0000 UTC m=+1031.170372374" lastFinishedPulling="2026-01-26 23:25:54.990840932 +0000 UTC m=+1059.155548397" observedRunningTime="2026-01-26 23:26:00.666588723 +0000 UTC m=+1064.831296228" watchObservedRunningTime="2026-01-26 23:26:00.66728024 +0000 UTC m=+1064.831987705" Jan 26 23:26:00 crc kubenswrapper[4995]: I0126 23:26:00.695492 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/alertmanager-metric-storage-0" podStartSLOduration=4.7456273719999995 podStartE2EDuration="33.695466935s" podCreationTimestamp="2026-01-26 23:25:27 +0000 UTC" firstStartedPulling="2026-01-26 23:25:28.917636205 +0000 UTC m=+1033.082343690" lastFinishedPulling="2026-01-26 23:25:57.867475788 +0000 UTC m=+1062.032183253" observedRunningTime="2026-01-26 23:26:00.688159263 +0000 UTC m=+1064.852866758" watchObservedRunningTime="2026-01-26 23:26:00.695466935 +0000 UTC m=+1064.860174420" Jan 26 23:26:01 crc kubenswrapper[4995]: I0126 23:26:01.658863 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/alertmanager-metric-storage-0" Jan 26 23:26:01 crc kubenswrapper[4995]: I0126 23:26:01.739237 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 26 23:26:04 crc kubenswrapper[4995]: I0126 23:26:04.563013 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-567f8c8d56-2j2x6" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" containerID="cri-o://e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" gracePeriod=15 Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.536881 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-567f8c8d56-2j2x6_05869402-35d4-4054-845a-e45b6e9ed633/console/0.log" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.537474 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603206 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603259 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603332 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 
23:26:05.603392 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603430 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603491 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.603610 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") pod \"05869402-35d4-4054-845a-e45b6e9ed633\" (UID: \"05869402-35d4-4054-845a-e45b6e9ed633\") " Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.604523 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605378 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config" (OuterVolumeSpecName: "console-config") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605478 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.605636 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca" (OuterVolumeSpecName: "service-ca") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.609363 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.610425 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd" (OuterVolumeSpecName: "kube-api-access-jdjsd") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "kube-api-access-jdjsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.610507 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "05869402-35d4-4054-845a-e45b6e9ed633" (UID: "05869402-35d4-4054-845a-e45b6e9ed633"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706786 4995 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706873 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdjsd\" (UniqueName: \"kubernetes.io/projected/05869402-35d4-4054-845a-e45b6e9ed633-kube-api-access-jdjsd\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706892 4995 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706909 4995 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706919 4995 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/05869402-35d4-4054-845a-e45b6e9ed633-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706929 4995 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.706937 4995 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/05869402-35d4-4054-845a-e45b6e9ed633-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.709430 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712535 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-567f8c8d56-2j2x6_05869402-35d4-4054-845a-e45b6e9ed633/console/0.log" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712589 4995 generic.go:334] "Generic (PLEG): container finished" podID="05869402-35d4-4054-845a-e45b6e9ed633" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" exitCode=2 Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712624 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" 
event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerDied","Data":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712645 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-567f8c8d56-2j2x6" event={"ID":"05869402-35d4-4054-845a-e45b6e9ed633","Type":"ContainerDied","Data":"caaa99e8918dfe5e0d9cbad0907826dac119f7c0d5e453be225658d7ea0903b4"} Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712666 4995 scope.go:117] "RemoveContainer" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.712666 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-567f8c8d56-2j2x6" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.738351 4995 scope.go:117] "RemoveContainer" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: E0126 23:26:05.738754 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": container with ID starting with e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95 not found: ID does not exist" containerID="e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.738790 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95"} err="failed to get container status \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": rpc error: code = NotFound desc = could not find container \"e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95\": container with ID starting with 
e118cd05317e7cd6f1acab853c9ededeae39f6b5f108b5428321e0f38bd4bf95 not found: ID does not exist" Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.754647 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:26:05 crc kubenswrapper[4995]: I0126 23:26:05.762254 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-567f8c8d56-2j2x6"] Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.403286 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.405795 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.526371 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05869402-35d4-4054-845a-e45b6e9ed633" path="/var/lib/kubelet/pods/05869402-35d4-4054-845a-e45b6e9ed633/volumes" Jan 26 23:26:06 crc kubenswrapper[4995]: I0126 23:26:06.789736 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:07 crc kubenswrapper[4995]: I0126 23:26:07.822737 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/openstack-galera-0" Jan 26 23:26:08 crc kubenswrapper[4995]: I0126 23:26:08.744284 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932"} Jan 26 23:26:13 crc kubenswrapper[4995]: I0126 23:26:13.795597 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" 
event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerStarted","Data":"16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1"} Jan 26 23:26:13 crc kubenswrapper[4995]: I0126 23:26:13.835194 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=4.717680243 podStartE2EDuration="47.835171463s" podCreationTimestamp="2026-01-26 23:25:26 +0000 UTC" firstStartedPulling="2026-01-26 23:25:29.732706514 +0000 UTC m=+1033.897413979" lastFinishedPulling="2026-01-26 23:26:12.850197704 +0000 UTC m=+1077.014905199" observedRunningTime="2026-01-26 23:26:13.826457465 +0000 UTC m=+1077.991164940" watchObservedRunningTime="2026-01-26 23:26:13.835171463 +0000 UTC m=+1077.999878938" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219027 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:15 crc kubenswrapper[4995]: E0126 23:26:15.219426 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219441 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.219668 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="05869402-35d4-4054-845a-e45b6e9ed633" containerName="console" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.220325 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.226882 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.247693 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.361359 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.361437 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464021 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464177 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: 
\"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.464937 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.506224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"root-account-create-update-tkjsp\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:15 crc kubenswrapper[4995]: I0126 23:26:15.583995 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.043732 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:26:16 crc kubenswrapper[4995]: W0126 23:26:16.064082 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c339608_1d36_448f_b3cd_00252341cf0d.slice/crio-93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055 WatchSource:0}: Error finding container 93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055: Status 404 returned error can't find the container with id 93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055 Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.319077 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.319969 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.363160 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.440514 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.441631 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.444942 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.449056 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.486949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.487274 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590645 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: 
\"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.590924 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.591712 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.609726 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"keystone-db-create-4fsqw\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.661973 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.692248 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.692369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.693094 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.708633 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"keystone-ea46-account-create-update-c7cfb\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.765224 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823310 4995 generic.go:334] "Generic (PLEG): container finished" podID="2c339608-1d36-448f-b3cd-00252341cf0d" containerID="a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d" exitCode=0 Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerDied","Data":"a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d"} Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.823369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerStarted","Data":"93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055"} Jan 26 23:26:16 crc kubenswrapper[4995]: I0126 23:26:16.913270 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.209218 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:26:17 crc kubenswrapper[4995]: W0126 23:26:17.210815 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94023397_a2e2_42cb_8469_003bc383aeaa.slice/crio-ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2 WatchSource:0}: Error finding container ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2: Status 404 returned error can't find the container with id ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.218018 4995 reflector.go:368] 
Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-db-secret" Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836449 4995 generic.go:334] "Generic (PLEG): container finished" podID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerID="87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329" exitCode=0 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836531 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerDied","Data":"87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.836562 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerStarted","Data":"246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.839654 4995 generic.go:334] "Generic (PLEG): container finished" podID="94023397-a2e2-42cb-8469-003bc383aeaa" containerID="02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938" exitCode=0 Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.839726 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerDied","Data":"02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938"} Jan 26 23:26:17 crc kubenswrapper[4995]: I0126 23:26:17.840080 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerStarted","Data":"ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2"} Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.253204 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.322695 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") pod \"2c339608-1d36-448f-b3cd-00252341cf0d\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.322757 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") pod \"2c339608-1d36-448f-b3cd-00252341cf0d\" (UID: \"2c339608-1d36-448f-b3cd-00252341cf0d\") " Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.324055 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c339608-1d36-448f-b3cd-00252341cf0d" (UID: "2c339608-1d36-448f-b3cd-00252341cf0d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.333423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw" (OuterVolumeSpecName: "kube-api-access-hzzkw") pod "2c339608-1d36-448f-b3cd-00252341cf0d" (UID: "2c339608-1d36-448f-b3cd-00252341cf0d"). InnerVolumeSpecName "kube-api-access-hzzkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.424434 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzzkw\" (UniqueName: \"kubernetes.io/projected/2c339608-1d36-448f-b3cd-00252341cf0d-kube-api-access-hzzkw\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.424471 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c339608-1d36-448f-b3cd-00252341cf0d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.434083 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852685 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/root-account-create-update-tkjsp" event={"ID":"2c339608-1d36-448f-b3cd-00252341cf0d","Type":"ContainerDied","Data":"93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055"} Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852759 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93c553598dee4a2dec72bb1ca9d8e6f0e17a72e4c444bc0ef778ab9489516055" Jan 26 23:26:18 crc kubenswrapper[4995]: I0126 23:26:18.852849 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/root-account-create-update-tkjsp" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.219777 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.224994 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") pod \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") pod \"94023397-a2e2-42cb-8469-003bc383aeaa\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") pod \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\" (UID: \"513f0b17-1707-4c0c-bc81-d7ead6a553c8\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") pod \"94023397-a2e2-42cb-8469-003bc383aeaa\" (UID: \"94023397-a2e2-42cb-8469-003bc383aeaa\") " Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338586 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "513f0b17-1707-4c0c-bc81-d7ead6a553c8" (UID: "513f0b17-1707-4c0c-bc81-d7ead6a553c8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.338724 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94023397-a2e2-42cb-8469-003bc383aeaa" (UID: "94023397-a2e2-42cb-8469-003bc383aeaa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.342705 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j" (OuterVolumeSpecName: "kube-api-access-h9k6j") pod "513f0b17-1707-4c0c-bc81-d7ead6a553c8" (UID: "513f0b17-1707-4c0c-bc81-d7ead6a553c8"). InnerVolumeSpecName "kube-api-access-h9k6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.347289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn" (OuterVolumeSpecName: "kube-api-access-rn7cn") pod "94023397-a2e2-42cb-8469-003bc383aeaa" (UID: "94023397-a2e2-42cb-8469-003bc383aeaa"). InnerVolumeSpecName "kube-api-access-rn7cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440552 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94023397-a2e2-42cb-8469-003bc383aeaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440874 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9k6j\" (UniqueName: \"kubernetes.io/projected/513f0b17-1707-4c0c-bc81-d7ead6a553c8-kube-api-access-h9k6j\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440899 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn7cn\" (UniqueName: \"kubernetes.io/projected/94023397-a2e2-42cb-8469-003bc383aeaa-kube-api-access-rn7cn\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.440917 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/513f0b17-1707-4c0c-bc81-d7ead6a553c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865843 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-create-4fsqw" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865866 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-create-4fsqw" event={"ID":"513f0b17-1707-4c0c-bc81-d7ead6a553c8","Type":"ContainerDied","Data":"246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b"} Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.865955 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="246d83749df994b0e10a3947bc98b209d3e9bc7aade4145b32b167ace5893f6b" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.877980 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" event={"ID":"94023397-a2e2-42cb-8469-003bc383aeaa","Type":"ContainerDied","Data":"ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2"} Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.878043 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebb915993d64506c4891273e5ad0cb862dc0d6281366c488ac36eb2debe220c2" Jan 26 23:26:19 crc kubenswrapper[4995]: I0126 23:26:19.878147 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.435313 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.439334 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.963310 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b909799-2071-4d68-ab55-d29f6e224bf2" containerID="e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d" exitCode=0 Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.963444 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerDied","Data":"e7f29c93726d236f06aa9087d1e9d21bb2a28fa032ee9081e34c3fa5089b832d"} Jan 26 23:26:28 crc kubenswrapper[4995]: I0126 23:26:28.966252 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.973248 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-server-0" event={"ID":"4b909799-2071-4d68-ab55-d29f6e224bf2","Type":"ContainerStarted","Data":"2bbeb0b3af7340893132d357f729117c681feea0d49203b5ba6681c3ed9e4488"} Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.974832 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.976877 4995 generic.go:334] "Generic (PLEG): container finished" podID="54ccebac-5075-4c00-a1e9-ebb66b43876e" containerID="17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3" exitCode=0 Jan 
26 23:26:29 crc kubenswrapper[4995]: I0126 23:26:29.976958 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerDied","Data":"17e91e1277b3bd73ad09330618c0692deb641db4b090da3d0321626052d2c9c3"} Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.004690 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-server-0" podStartSLOduration=37.702226673 podStartE2EDuration="1m7.004672707s" podCreationTimestamp="2026-01-26 23:25:23 +0000 UTC" firstStartedPulling="2026-01-26 23:25:25.752198185 +0000 UTC m=+1029.916905700" lastFinishedPulling="2026-01-26 23:25:55.054644249 +0000 UTC m=+1059.219351734" observedRunningTime="2026-01-26 23:26:30.002836301 +0000 UTC m=+1094.167543776" watchObservedRunningTime="2026-01-26 23:26:30.004672707 +0000 UTC m=+1094.169380172" Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.989663 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" event={"ID":"54ccebac-5075-4c00-a1e9-ebb66b43876e","Type":"ContainerStarted","Data":"6fcbc0e6cd5e113b3be60c17a9d7503e8bfa7c29370ea8b503ff58089a08b53c"} Jan 26 23:26:30 crc kubenswrapper[4995]: I0126 23:26:30.990308 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.026930 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" podStartSLOduration=-9223371968.82787 podStartE2EDuration="1m8.026907057s" podCreationTimestamp="2026-01-26 23:25:23 +0000 UTC" firstStartedPulling="2026-01-26 23:25:25.482141342 +0000 UTC m=+1029.646848807" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:26:31.018044155 +0000 UTC 
m=+1095.182751660" watchObservedRunningTime="2026-01-26 23:26:31.026907057 +0000 UTC m=+1095.191614562" Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.254469 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.254799 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" containerID="cri-o://09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" gracePeriod=600 Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.255394 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" containerID="cri-o://bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" gracePeriod=600 Jan 26 23:26:31 crc kubenswrapper[4995]: I0126 23:26:31.255367 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/prometheus-metric-storage-0" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" containerID="cri-o://16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" gracePeriod=600 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008046 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1"} Jan 26 23:26:32 crc 
kubenswrapper[4995]: I0126 23:26:32.008617 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932"} Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008563 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.008662 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerID="09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" exitCode=0 Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.009216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b"} Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.200975 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260542 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260632 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260677 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260716 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260911 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") 
" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.260994 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261034 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261075 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261114 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") pod \"0d12a498-5a42-42d5-9ab1-12d436c41187\" (UID: \"0d12a498-5a42-42d5-9ab1-12d436c41187\") " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261365 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261707 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.261857 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266013 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266310 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out" (OuterVolumeSpecName: "config-out") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266598 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp" (OuterVolumeSpecName: "kube-api-access-wtrzp") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "kube-api-access-wtrzp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.266962 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.281006 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.283284 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config" (OuterVolumeSpecName: "config") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.286875 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config" (OuterVolumeSpecName: "web-config") pod "0d12a498-5a42-42d5-9ab1-12d436c41187" (UID: "0d12a498-5a42-42d5-9ab1-12d436c41187"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363128 4995 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") on node \"crc\" " Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363176 4995 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0d12a498-5a42-42d5-9ab1-12d436c41187-config-out\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363193 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363212 4995 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363225 4995 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-web-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363238 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtrzp\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-kube-api-access-wtrzp\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363250 4995 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/0d12a498-5a42-42d5-9ab1-12d436c41187-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363262 4995 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0d12a498-5a42-42d5-9ab1-12d436c41187-tls-assets\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.363275 4995 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0d12a498-5a42-42d5-9ab1-12d436c41187-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.391586 4995 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.391784 4995 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5") on node "crc" Jan 26 23:26:32 crc kubenswrapper[4995]: I0126 23:26:32.464681 4995 reconciler_common.go:293] "Volume detached for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") on node \"crc\" DevicePath \"\"" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.018642 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"0d12a498-5a42-42d5-9ab1-12d436c41187","Type":"ContainerDied","Data":"09e644cca6d7bb2e34c3abbe27a572044fa392307e8fabe836e1c584f958c8a8"} Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.018706 4995 scope.go:117] "RemoveContainer" 
containerID="16d9e079f8d7d37a004ac0ceaa971f9a942ef0d5ffcfa30b1b10720ab9d634c1" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.020012 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.044379 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.051201 4995 scope.go:117] "RemoveContainer" containerID="bd5dfbb02b8531c020e670b9f902d417ae21031bc93d721afb834a5013e17932" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.053030 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.072290 4995 scope.go:117] "RemoveContainer" containerID="09365b795c4ad40149307a21bb9b3674f94b5fbd9fb5e8958df02a30eb16d82b" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.088643 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089239 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089256 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089286 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="init-config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089292 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="init-config-reloader" Jan 26 23:26:33 crc 
kubenswrapper[4995]: E0126 23:26:33.089306 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089311 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089329 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089335 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089351 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089356 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089367 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089374 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: E0126 23:26:33.089383 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089388 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" 
containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089516 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089528 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="config-reloader" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089537 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="prometheus" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089546 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" containerName="thanos-sidecar" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089557 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" containerName="mariadb-account-create-update" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.089569 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" containerName="mariadb-database-create" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.090856 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.100352 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101131 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101437 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101776 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-2" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.101980 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"metric-storage-prometheus-dockercfg-wlv4m" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.103158 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"prometheus-metric-storage-rulefiles-1" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.104181 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-metric-storage-prometheus-svc" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.109772 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-web-config" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.111307 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"prometheus-metric-storage-tls-assets-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.130317 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.148058 4995 scope.go:117] "RemoveContainer" containerID="60fe22fde9a4342de9f3d1074bc86d7eebf6bacf28576a78e4d758d91299a714" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175365 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175407 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175442 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " 
pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175519 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175540 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175564 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175583 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175601 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175625 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.175689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278624 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278742 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278814 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278909 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.278962 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 
crc kubenswrapper[4995]: I0126 23:26:33.279010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279084 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.279145 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.284396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.284914 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.288151 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.289752 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/331b761a-fa99-405f-aedf-a94cb456cdfc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.289926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.291946 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.293057 4995 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.293121 4995 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/07692cb0263c36332c1ef11dc7b21734b21031d82ebacc820f394211727ef21a/globalmount\"" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.294248 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.297807 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.297968 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.299363 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/331b761a-fa99-405f-aedf-a94cb456cdfc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.315457 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/331b761a-fa99-405f-aedf-a94cb456cdfc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.316864 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6zw6\" (UniqueName: \"kubernetes.io/projected/331b761a-fa99-405f-aedf-a94cb456cdfc-kube-api-access-r6zw6\") pod \"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.345012 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-38a50d27-a16b-4ddf-a529-d4d069d847e5\") pod 
\"prometheus-metric-storage-0\" (UID: \"331b761a-fa99-405f-aedf-a94cb456cdfc\") " pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.481496 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:33 crc kubenswrapper[4995]: I0126 23:26:33.893967 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/prometheus-metric-storage-0"] Jan 26 23:26:34 crc kubenswrapper[4995]: I0126 23:26:34.029500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"0394c2ef21f8fe7b4cbc1ab82e9ff5627689a1a39aad71cbd3c82f561617a208"} Jan 26 23:26:34 crc kubenswrapper[4995]: I0126 23:26:34.528063 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d12a498-5a42-42d5-9ab1-12d436c41187" path="/var/lib/kubelet/pods/0d12a498-5a42-42d5-9ab1-12d436c41187/volumes" Jan 26 23:26:37 crc kubenswrapper[4995]: I0126 23:26:37.059704 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14"} Jan 26 23:26:44 crc kubenswrapper[4995]: I0126 23:26:44.973456 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-notifications-server-0" Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.126250 4995 generic.go:334] "Generic (PLEG): container finished" podID="331b761a-fa99-405f-aedf-a94cb456cdfc" containerID="de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14" exitCode=0 Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.126298 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerDied","Data":"de647f9ac8aab99780612f34a28a068c79389bc86568af3d6363169cf9cd3e14"} Jan 26 23:26:45 crc kubenswrapper[4995]: I0126 23:26:45.251286 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/rabbitmq-server-0" Jan 26 23:26:46 crc kubenswrapper[4995]: I0126 23:26:46.136345 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"3f7629aff9fe372c60ee1f1fac00dcd8f2a348de8bfcebbf798ef20ad9107e1d"} Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.258876 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.260127 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262512 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262736 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.262783 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.263059 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.276498 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 
23:26:47.407395 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.407488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.407602 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.509462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.509555 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 
crc kubenswrapper[4995]: I0126 23:26:47.509596 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.515547 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.519381 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.527076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"keystone-db-sync-27jdj\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:47 crc kubenswrapper[4995]: I0126 23:26:47.577872 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.081885 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"] Jan 26 23:26:48 crc kubenswrapper[4995]: W0126 23:26:48.089321 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad6fb114_59e8_443d_acd9_7241b8ee783c.slice/crio-643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847 WatchSource:0}: Error finding container 643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847: Status 404 returned error can't find the container with id 643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847 Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.149629 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerStarted","Data":"643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.152517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"672ac3cf1ceefa2116be2b4c4e6819a8f341f8a671d408c38acbba77ad970241"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.152567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/prometheus-metric-storage-0" event={"ID":"331b761a-fa99-405f-aedf-a94cb456cdfc","Type":"ContainerStarted","Data":"2847af464585b43a387053c0434865b9b97178de8b8f7bc6e7b4b1d4c2e7dec8"} Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.186387 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/prometheus-metric-storage-0" podStartSLOduration=15.186371836 
podStartE2EDuration="15.186371836s" podCreationTimestamp="2026-01-26 23:26:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:26:48.18134457 +0000 UTC m=+1112.346052035" watchObservedRunningTime="2026-01-26 23:26:48.186371836 +0000 UTC m=+1112.351079301" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.481568 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.481607 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:48 crc kubenswrapper[4995]: I0126 23:26:48.490710 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:49 crc kubenswrapper[4995]: I0126 23:26:49.180245 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/prometheus-metric-storage-0" Jan 26 23:26:57 crc kubenswrapper[4995]: I0126 23:26:57.254184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerStarted","Data":"7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95"} Jan 26 23:26:57 crc kubenswrapper[4995]: I0126 23:26:57.269856 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-db-sync-27jdj" podStartSLOduration=1.756733699 podStartE2EDuration="10.26983843s" podCreationTimestamp="2026-01-26 23:26:47 +0000 UTC" firstStartedPulling="2026-01-26 23:26:48.092744983 +0000 UTC m=+1112.257452438" lastFinishedPulling="2026-01-26 23:26:56.605849684 +0000 UTC m=+1120.770557169" observedRunningTime="2026-01-26 23:26:57.265295916 +0000 UTC m=+1121.430003381" 
watchObservedRunningTime="2026-01-26 23:26:57.26983843 +0000 UTC m=+1121.434545885" Jan 26 23:27:02 crc kubenswrapper[4995]: I0126 23:27:02.320507 4995 generic.go:334] "Generic (PLEG): container finished" podID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerID="7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95" exitCode=0 Jan 26 23:27:02 crc kubenswrapper[4995]: I0126 23:27:02.320599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerDied","Data":"7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95"} Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.694350 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824240 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824331 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.824370 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") pod \"ad6fb114-59e8-443d-acd9-7241b8ee783c\" (UID: \"ad6fb114-59e8-443d-acd9-7241b8ee783c\") " Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.833428 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd" (OuterVolumeSpecName: "kube-api-access-clrjd") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "kube-api-access-clrjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.848448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.867537 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data" (OuterVolumeSpecName: "config-data") pod "ad6fb114-59e8-443d-acd9-7241b8ee783c" (UID: "ad6fb114-59e8-443d-acd9-7241b8ee783c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925749 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925780 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clrjd\" (UniqueName: \"kubernetes.io/projected/ad6fb114-59e8-443d-acd9-7241b8ee783c-kube-api-access-clrjd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:03 crc kubenswrapper[4995]: I0126 23:27:03.925790 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad6fb114-59e8-443d-acd9-7241b8ee783c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374357 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-db-sync-27jdj" event={"ID":"ad6fb114-59e8-443d-acd9-7241b8ee783c","Type":"ContainerDied","Data":"643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847"} Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374412 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="643ed4e226a078320fabf79998d01b8d2f252ac39a159da417ed3ad6af5c8847" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.374487 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-db-sync-27jdj" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.497583 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:04 crc kubenswrapper[4995]: E0126 23:27:04.498016 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.498040 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.498236 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" containerName="keystone-db-sync" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.499277 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.502705 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.502878 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503248 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503668 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.503921 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.510272 4995 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637847 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637864 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637886 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod 
\"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.637966 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739454 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739577 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"keystone-bootstrap-mjf8m\" (UID: 
\"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739609 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.739696 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.746055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.746047 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.748195 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " 
pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.750708 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.765439 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.770575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"keystone-bootstrap-mjf8m\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.827419 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.882304 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.902696 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.905579 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.906036 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:27:04 crc kubenswrapper[4995]: I0126 23:27:04.906386 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045148 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045556 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045620 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045637 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045672 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.045727 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146807 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146849 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: 
\"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146880 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146898 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146969 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.146998 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.147953 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.148035 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.152338 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.153301 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.156058 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.157812 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.169019 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"ceilometer-0\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.240406 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.425047 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:05 crc kubenswrapper[4995]: I0126 23:27:05.721734 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.395484 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"b30e57df07b8d7a973237b5635e98b0b6195b3d09a9b2387b7e99c853dc62c13"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.400450 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerStarted","Data":"314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.400480 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerStarted","Data":"6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956"} Jan 26 23:27:06 crc kubenswrapper[4995]: I0126 23:27:06.424157 4995 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" podStartSLOduration=2.424130446 podStartE2EDuration="2.424130446s" podCreationTimestamp="2026-01-26 23:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:06.416328631 +0000 UTC m=+1130.581036096" watchObservedRunningTime="2026-01-26 23:27:06.424130446 +0000 UTC m=+1130.588837931" Jan 26 23:27:07 crc kubenswrapper[4995]: I0126 23:27:07.042406 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:09 crc kubenswrapper[4995]: I0126 23:27:09.429770 4995 generic.go:334] "Generic (PLEG): container finished" podID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerID="314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82" exitCode=0 Jan 26 23:27:09 crc kubenswrapper[4995]: I0126 23:27:09.429871 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerDied","Data":"314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82"} Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.439641 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"} Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.839682 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940519 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940587 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940690 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940740 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.940820 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") pod \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\" (UID: \"3f780111-a9d8-4610-ab38-a2d392cf9bfc\") " Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.945514 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx" (OuterVolumeSpecName: "kube-api-access-j7hnx") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "kube-api-access-j7hnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.960011 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.960167 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.962574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts" (OuterVolumeSpecName: "scripts") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.967643 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data" (OuterVolumeSpecName: "config-data") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:10 crc kubenswrapper[4995]: I0126 23:27:10.972296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f780111-a9d8-4610-ab38-a2d392cf9bfc" (UID: "3f780111-a9d8-4610-ab38-a2d392cf9bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045024 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045145 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045298 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045322 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc 
kubenswrapper[4995]: I0126 23:27:11.045425 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7hnx\" (UniqueName: \"kubernetes.io/projected/3f780111-a9d8-4610-ab38-a2d392cf9bfc-kube-api-access-j7hnx\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.045445 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f780111-a9d8-4610-ab38-a2d392cf9bfc-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" event={"ID":"3f780111-a9d8-4610-ab38-a2d392cf9bfc","Type":"ContainerDied","Data":"6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956"} Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456914 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e4ffcbf563bd07699c50048988b8b1ab9175d90361dc0342386b3bb930f4956" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.456943 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-mjf8m" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.530904 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.537902 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-mjf8m"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.615426 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:11 crc kubenswrapper[4995]: E0126 23:27:11.616038 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.616082 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.616439 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" containerName="keystone-bootstrap" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.617417 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625427 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625629 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625818 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.625943 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.626118 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.626211 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.755869 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.755922 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756089 
4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756187 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756253 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.756338 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858227 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc 
kubenswrapper[4995]: I0126 23:27:11.858325 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858397 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858526 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.858589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.867507 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.868020 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.868235 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.872867 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.873486 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.894885 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v9ld\" 
(UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"keystone-bootstrap-w6lw7\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:11 crc kubenswrapper[4995]: I0126 23:27:11.974606 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:12 crc kubenswrapper[4995]: I0126 23:27:12.532634 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f780111-a9d8-4610-ab38-a2d392cf9bfc" path="/var/lib/kubelet/pods/3f780111-a9d8-4610-ab38-a2d392cf9bfc/volumes" Jan 26 23:27:12 crc kubenswrapper[4995]: I0126 23:27:12.970172 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:27:12 crc kubenswrapper[4995]: W0126 23:27:12.972726 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod049184a2_2d7f_4107_8a72_197fede36e5b.slice/crio-77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849 WatchSource:0}: Error finding container 77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849: Status 404 returned error can't find the container with id 77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849 Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.479249 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerStarted","Data":"7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.479660 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" 
event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerStarted","Data":"77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.483759 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"} Jan 26 23:27:13 crc kubenswrapper[4995]: I0126 23:27:13.514229 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" podStartSLOduration=2.514200438 podStartE2EDuration="2.514200438s" podCreationTimestamp="2026-01-26 23:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:13.508909256 +0000 UTC m=+1137.673616761" watchObservedRunningTime="2026-01-26 23:27:13.514200438 +0000 UTC m=+1137.678907943" Jan 26 23:27:16 crc kubenswrapper[4995]: I0126 23:27:16.509542 4995 generic.go:334] "Generic (PLEG): container finished" podID="049184a2-2d7f-4107-8a72-197fede36e5b" containerID="7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02" exitCode=0 Jan 26 23:27:16 crc kubenswrapper[4995]: I0126 23:27:16.510226 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerDied","Data":"7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02"} Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.896711 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974247 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.974385 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976049 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976412 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.976834 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") pod \"049184a2-2d7f-4107-8a72-197fede36e5b\" (UID: \"049184a2-2d7f-4107-8a72-197fede36e5b\") " Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.979493 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.979610 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld" (OuterVolumeSpecName: "kube-api-access-8v9ld") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "kube-api-access-8v9ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.984746 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts" (OuterVolumeSpecName: "scripts") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:17.987422 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.011991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data" (OuterVolumeSpecName: "config-data") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.020183 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "049184a2-2d7f-4107-8a72-197fede36e5b" (UID: "049184a2-2d7f-4107-8a72-197fede36e5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084414 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084451 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084472 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v9ld\" (UniqueName: \"kubernetes.io/projected/049184a2-2d7f-4107-8a72-197fede36e5b-kube-api-access-8v9ld\") on node \"crc\" DevicePath 
\"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084498 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.084508 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/049184a2-2d7f-4107-8a72-197fede36e5b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.536045 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537208 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"} Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537285 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-w6lw7" event={"ID":"049184a2-2d7f-4107-8a72-197fede36e5b","Type":"ContainerDied","Data":"77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849"} Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.537316 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77847e506d7fe32fe66cb3ece68a2c451ca4dbee6bd3973dfb07be742ab2d849" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.633694 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:27:18 crc kubenswrapper[4995]: E0126 23:27:18.634242 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 
23:27:18.634342 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.634656 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" containerName="keystone-bootstrap" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.635422 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.638612 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-public-svc" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.638690 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-config-data" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639096 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-keystone-dockercfg-vx5bj" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639172 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-keystone-internal-svc" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.639392 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone-scripts" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.642429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"keystone" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.663644 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697268 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697334 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697368 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697525 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697589 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697684 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697736 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.697917 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.799955 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800076 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800151 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800192 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800249 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.800353 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804147 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804271 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804409 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.804458 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.805580 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.805705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.813221 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.822233 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"keystone-7cb4bf847-27cbg\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:18 crc kubenswrapper[4995]: I0126 23:27:18.958466 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:19 crc kubenswrapper[4995]: I0126 23:27:19.427587 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:27:19 crc kubenswrapper[4995]: I0126 23:27:19.544096 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerStarted","Data":"c831199d822b765352d7f3cfddb29be2235d20cab03abeb963d2d581104d23cb"} Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.553945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerStarted","Data":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"} Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.555424 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:20 crc kubenswrapper[4995]: I0126 23:27:20.584356 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" podStartSLOduration=2.584329851 podStartE2EDuration="2.584329851s" podCreationTimestamp="2026-01-26 23:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:27:20.578213878 +0000 UTC m=+1144.742921363" watchObservedRunningTime="2026-01-26 23:27:20.584329851 +0000 UTC m=+1144.749037336" Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629178 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerStarted","Data":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"} Jan 26 23:27:27 crc 
kubenswrapper[4995]: I0126 23:27:27.629477 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent" containerID="cri-o://c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" gracePeriod=30 Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629641 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent" containerID="cri-o://e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" gracePeriod=30 Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629742 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core" containerID="cri-o://76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" gracePeriod=30 Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.629763 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.632003 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd" containerID="cri-o://3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" gracePeriod=30 Jan 26 23:27:27 crc kubenswrapper[4995]: I0126 23:27:27.665379 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.5880977290000002 podStartE2EDuration="23.665354356s" podCreationTimestamp="2026-01-26 23:27:04 +0000 UTC" firstStartedPulling="2026-01-26 23:27:05.719817852 +0000 UTC m=+1129.884525317" 
lastFinishedPulling="2026-01-26 23:27:26.797074489 +0000 UTC m=+1150.961781944" observedRunningTime="2026-01-26 23:27:27.661348196 +0000 UTC m=+1151.826055661" watchObservedRunningTime="2026-01-26 23:27:27.665354356 +0000 UTC m=+1151.830061821" Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.638473 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" exitCode=0 Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639549 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" exitCode=2 Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639619 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" exitCode=0 Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.638532 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"} Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639749 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"} Jan 26 23:27:28 crc kubenswrapper[4995]: I0126 23:27:28.639805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"} Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.461752 
4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617133 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617208 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617412 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617449 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.617988 4995 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618093 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618155 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") pod \"737871cc-e3fc-48e8-983d-10b3171b8fd8\" (UID: \"737871cc-e3fc-48e8-983d-10b3171b8fd8\") " Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.618709 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.622686 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl" (OuterVolumeSpecName: "kube-api-access-mtpdl") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "kube-api-access-mtpdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.625898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts" (OuterVolumeSpecName: "scripts") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.641851 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657457 4995 generic.go:334] "Generic (PLEG): container finished" podID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" exitCode=0 Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657506 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"} Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657556 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"737871cc-e3fc-48e8-983d-10b3171b8fd8","Type":"ContainerDied","Data":"b30e57df07b8d7a973237b5635e98b0b6195b3d09a9b2387b7e99c853dc62c13"} Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.657578 4995 scope.go:117] "RemoveContainer" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.719836 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720232 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/737871cc-e3fc-48e8-983d-10b3171b8fd8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720256 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720269 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtpdl\" (UniqueName: \"kubernetes.io/projected/737871cc-e3fc-48e8-983d-10b3171b8fd8-kube-api-access-mtpdl\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.720283 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.725431 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data" (OuterVolumeSpecName: "config-data") pod "737871cc-e3fc-48e8-983d-10b3171b8fd8" (UID: "737871cc-e3fc-48e8-983d-10b3171b8fd8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.765510 4995 scope.go:117] "RemoveContainer" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.786059 4995 scope.go:117] "RemoveContainer" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.803848 4995 scope.go:117] "RemoveContainer" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.817747 4995 scope.go:117] "RemoveContainer" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.818209 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": container with ID starting with 3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81 not found: ID does not exist" containerID="3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818264 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81"} err="failed to get container status \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": rpc error: code = NotFound desc = could not find container \"3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81\": container with ID starting with 3647aed124bde3ed5330778fa1b8c8fad1d250045390485e3c828fc86d7c8a81 not found: ID does not exist" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818302 4995 scope.go:117] "RemoveContainer" 
containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.818860 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": container with ID starting with 76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175 not found: ID does not exist" containerID="76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818888 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175"} err="failed to get container status \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": rpc error: code = NotFound desc = could not find container \"76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175\": container with ID starting with 76b5fb8d8e0904bca17d95c9bb5f67d3224ffc2837098ff6efb7619797a46175 not found: ID does not exist" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.818908 4995 scope.go:117] "RemoveContainer" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.819163 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": container with ID starting with e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618 not found: ID does not exist" containerID="e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.819195 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618"} err="failed to get container status \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": rpc error: code = NotFound desc = could not find container \"e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618\": container with ID starting with e87249341b78f742be67af4c39f3e5abcab9b5fe46a16a0e0d3dffe7dfe86618 not found: ID does not exist" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.819214 4995 scope.go:117] "RemoveContainer" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" Jan 26 23:27:30 crc kubenswrapper[4995]: E0126 23:27:30.820575 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": container with ID starting with c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908 not found: ID does not exist" containerID="c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.820616 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908"} err="failed to get container status \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": rpc error: code = NotFound desc = could not find container \"c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908\": container with ID starting with c5363f43d160631c1385cebe4ee1b8564bc8596bd21f5f15e4793bfc8096c908 not found: ID does not exist" Jan 26 23:27:30 crc kubenswrapper[4995]: I0126 23:27:30.821943 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:30 crc 
kubenswrapper[4995]: I0126 23:27:30.822020 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/737871cc-e3fc-48e8-983d-10b3171b8fd8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.006794 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.018556 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029220 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029702 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029729 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029759 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029772 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029797 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029810 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core" Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.029826 
4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.029838 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030171 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="sg-core" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030194 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="proxy-httpd" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030223 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-notification-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.030240 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" containerName="ceilometer-central-agent" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.037468 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.037593 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.040145 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.040795 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.118059 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:31 crc kubenswrapper[4995]: E0126 23:27:31.118916 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-7pjjh log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="watcher-kuttl-default/ceilometer-0" podUID="88e61da0-4417-469b-a34d-ebdc2c449e85" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.126978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127033 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127141 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127164 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127318 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.127663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229209 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: 
\"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229409 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229458 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229500 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229573 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.229607 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.230257 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.230455 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.233625 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.235857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.235925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"ceilometer-0\" (UID: 
\"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.237393 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.245627 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"ceilometer-0\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.668515 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.702976 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839170 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839271 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839310 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839446 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839472 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.839517 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") pod \"88e61da0-4417-469b-a34d-ebdc2c449e85\" (UID: \"88e61da0-4417-469b-a34d-ebdc2c449e85\") " Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.840548 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.841194 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.846061 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.846293 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh" (OuterVolumeSpecName: "kube-api-access-7pjjh") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "kube-api-access-7pjjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.847190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data" (OuterVolumeSpecName: "config-data") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.847289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts" (OuterVolumeSpecName: "scripts") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.853409 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88e61da0-4417-469b-a34d-ebdc2c449e85" (UID: "88e61da0-4417-469b-a34d-ebdc2c449e85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941631 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941669 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941682 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941690 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941699 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pjjh\" (UniqueName: \"kubernetes.io/projected/88e61da0-4417-469b-a34d-ebdc2c449e85-kube-api-access-7pjjh\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941707 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88e61da0-4417-469b-a34d-ebdc2c449e85-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:31 crc kubenswrapper[4995]: I0126 23:27:31.941717 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88e61da0-4417-469b-a34d-ebdc2c449e85-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.528553 4995 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="737871cc-e3fc-48e8-983d-10b3171b8fd8" path="/var/lib/kubelet/pods/737871cc-e3fc-48e8-983d-10b3171b8fd8/volumes" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.674434 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.759293 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.770497 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.777369 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.779518 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.782074 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.782331 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.784056 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.857934 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858160 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858238 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858403 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858501 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858607 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.858683 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960082 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960206 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960247 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960277 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960320 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960398 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960449 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.960881 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.961515 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.963921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.964651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.967753 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.973903 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:32 crc kubenswrapper[4995]: I0126 23:27:32.981477 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"ceilometer-0\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.102321 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.554749 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:27:33 crc kubenswrapper[4995]: W0126 23:27:33.560230 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf03d11d7_e58e_4d08_85b4_c512e9deb887.slice/crio-b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625 WatchSource:0}: Error finding container b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625: Status 404 returned error can't find the container with id b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625 Jan 26 23:27:33 crc kubenswrapper[4995]: I0126 23:27:33.687870 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625"} Jan 26 23:27:34 crc kubenswrapper[4995]: I0126 23:27:34.527913 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e61da0-4417-469b-a34d-ebdc2c449e85" path="/var/lib/kubelet/pods/88e61da0-4417-469b-a34d-ebdc2c449e85/volumes" Jan 26 23:27:34 crc kubenswrapper[4995]: I0126 23:27:34.698575 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"} Jan 26 23:27:35 crc kubenswrapper[4995]: I0126 23:27:35.707376 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} Jan 26 23:27:36 crc kubenswrapper[4995]: I0126 
23:27:36.720849 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"} Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.731865 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerStarted","Data":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"} Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.732180 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:27:37 crc kubenswrapper[4995]: I0126 23:27:37.777471 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.505734491 podStartE2EDuration="5.777448722s" podCreationTimestamp="2026-01-26 23:27:32 +0000 UTC" firstStartedPulling="2026-01-26 23:27:33.563479101 +0000 UTC m=+1157.728186566" lastFinishedPulling="2026-01-26 23:27:36.835193322 +0000 UTC m=+1160.999900797" observedRunningTime="2026-01-26 23:27:37.769657637 +0000 UTC m=+1161.934365112" watchObservedRunningTime="2026-01-26 23:27:37.777448722 +0000 UTC m=+1161.942156187" Jan 26 23:27:50 crc kubenswrapper[4995]: I0126 23:27:50.485272 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.270484 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.272657 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.273143 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.287740 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"openstack-config" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.287827 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstackclient-openstackclient-dockercfg-r4pnv" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.288026 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"openstack-config-secret" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.288086 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.293186 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.300515 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: E0126 23:27:52.305871 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle kube-api-access-hwg4s openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[combined-ca-bundle kube-api-access-hwg4s openstack-config openstack-config-secret]: context canceled" pod="watcher-kuttl-default/openstackclient" podUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.311159 4995 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.312475 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.315523 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324616 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324654 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324688 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.324719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " 
pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428579 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428623 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.428659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.429703 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 
23:27:52.444141 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.445578 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f27553d1-06f5-4e72-9d14-714d48fbd854-openstack-config-secret\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.465754 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2pjk\" (UniqueName: \"kubernetes.io/projected/f27553d1-06f5-4e72-9d14-714d48fbd854-kube-api-access-l2pjk\") pod \"openstackclient\" (UID: \"f27553d1-06f5-4e72-9d14-714d48fbd854\") " pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.525683 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" path="/var/lib/kubelet/pods/5eee249b-5796-4844-a0e8-ae9fceb1ed44/volumes" Jan 26 23:27:52 crc kubenswrapper[4995]: I0126 23:27:52.628878 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.115554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/openstackclient"] Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.253827 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.253814 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"f27553d1-06f5-4e72-9d14-714d48fbd854","Type":"ContainerStarted","Data":"ceabf1d326bea8f0d86c6be859d72952a99fe400b127cbdc9c9578e248ba9271"} Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.258150 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854" Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.268776 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:53 crc kubenswrapper[4995]: I0126 23:27:53.272659 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854" Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.261275 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/openstackclient" Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.265178 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854" Jan 26 23:27:54 crc kubenswrapper[4995]: I0126 23:27:54.276772 4995 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="watcher-kuttl-default/openstackclient" oldPodUID="5eee249b-5796-4844-a0e8-ae9fceb1ed44" podUID="f27553d1-06f5-4e72-9d14-714d48fbd854" Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.111869 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.342567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/openstackclient" event={"ID":"f27553d1-06f5-4e72-9d14-714d48fbd854","Type":"ContainerStarted","Data":"ba7945f7293bcf7b5a5ab4ffeb2793509255cd0a694a783c3fff9fa88b57d590"} Jan 26 23:28:03 crc kubenswrapper[4995]: I0126 23:28:03.366940 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/openstackclient" podStartSLOduration=3.312140655 podStartE2EDuration="12.366919396s" podCreationTimestamp="2026-01-26 23:27:51 +0000 UTC" firstStartedPulling="2026-01-26 23:27:53.111428702 +0000 UTC m=+1177.276136167" lastFinishedPulling="2026-01-26 23:28:02.166207443 +0000 UTC m=+1186.330914908" observedRunningTime="2026-01-26 23:28:03.36188784 +0000 UTC m=+1187.526595305" watchObservedRunningTime="2026-01-26 23:28:03.366919396 +0000 UTC m=+1187.531626881" Jan 26 23:28:05 crc kubenswrapper[4995]: I0126 23:28:05.694519 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:05 crc kubenswrapper[4995]: I0126 23:28:05.695806 
4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/kube-state-metrics-0" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics" containerID="cri-o://dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" gracePeriod=30 Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.106429 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.259539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") pod \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\" (UID: \"f3e7ef92-19e4-45be-ba39-e8c1b10c2110\") " Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.267356 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4" (OuterVolumeSpecName: "kube-api-access-2vjq4") pod "f3e7ef92-19e4-45be-ba39-e8c1b10c2110" (UID: "f3e7ef92-19e4-45be-ba39-e8c1b10c2110"). InnerVolumeSpecName "kube-api-access-2vjq4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.361716 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vjq4\" (UniqueName: \"kubernetes.io/projected/f3e7ef92-19e4-45be-ba39-e8c1b10c2110-kube-api-access-2vjq4\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365655 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" exitCode=2 Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365697 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerDied","Data":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"} Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365729 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"f3e7ef92-19e4-45be-ba39-e8c1b10c2110","Type":"ContainerDied","Data":"af898602486bbd8c6c6157c2639e73c909ad485c5d6cbfe7b28ea19f3b85c23d"} Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365746 4995 scope.go:117] "RemoveContainer" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.365764 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.390442 4995 scope.go:117] "RemoveContainer" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" Jan 26 23:28:06 crc kubenswrapper[4995]: E0126 23:28:06.396098 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": container with ID starting with dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f not found: ID does not exist" containerID="dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.396183 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f"} err="failed to get container status \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": rpc error: code = NotFound desc = could not find container \"dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f\": container with ID starting with dc51943b7e39300c36487d9523e083e9a33ccd5bb845547c61c399722086814f not found: ID does not exist" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.407761 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.424257 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.431876 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:06 crc kubenswrapper[4995]: E0126 23:28:06.432235 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" 
containerName="kube-state-metrics" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.432252 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.432413 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" containerName="kube-state-metrics" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.433067 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.439556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-kube-state-metrics-svc" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.439556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"kube-state-metrics-tls-config" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.444850 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.525967 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e7ef92-19e4-45be-ba39-e8c1b10c2110" path="/var/lib/kubelet/pods/f3e7ef92-19e4-45be-ba39-e8c1b10c2110/volumes" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564313 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564371 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564418 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.564530 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.665978 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666117 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666191 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.666218 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.671329 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.676970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.682294 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r6zk\" (UniqueName: \"kubernetes.io/projected/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-api-access-5r6zk\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.682889 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/86cef714-2c2e-4825-bab7-c653df90a3c2-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"86cef714-2c2e-4825-bab7-c653df90a3c2\") " pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.763404 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.773053 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.774781 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" containerID="cri-o://36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" gracePeriod=30 Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.774960 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" containerID="cri-o://3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" gracePeriod=30 Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.775017 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" containerID="cri-o://9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" gracePeriod=30 Jan 26 23:28:06 crc kubenswrapper[4995]: I0126 23:28:06.775058 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" 
containerName="ceilometer-notification-agent" containerID="cri-o://56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" gracePeriod=30 Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.258150 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/kube-state-metrics-0"] Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376414 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" exitCode=0 Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376670 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" exitCode=2 Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376681 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" exitCode=0 Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"} Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"} Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.376812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"} Jan 26 23:28:07 crc kubenswrapper[4995]: I0126 23:28:07.378008 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"86cef714-2c2e-4825-bab7-c653df90a3c2","Type":"ContainerStarted","Data":"9bce68767181e4090036940686dd8e2de04500a5fba4896213b150fe5871ac82"} Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.387375 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/kube-state-metrics-0" event={"ID":"86cef714-2c2e-4825-bab7-c653df90a3c2","Type":"ContainerStarted","Data":"1acae8244e0225e2d330296570ce9e2e40e184a2f56eb510025212d14673224c"} Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.387800 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:08 crc kubenswrapper[4995]: I0126 23:28:08.408425 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/kube-state-metrics-0" podStartSLOduration=2.050775055 podStartE2EDuration="2.408393069s" podCreationTimestamp="2026-01-26 23:28:06 +0000 UTC" firstStartedPulling="2026-01-26 23:28:07.252005462 +0000 UTC m=+1191.416712927" lastFinishedPulling="2026-01-26 23:28:07.609623466 +0000 UTC m=+1191.774330941" observedRunningTime="2026-01-26 23:28:08.400490911 +0000 UTC m=+1192.565198376" watchObservedRunningTime="2026-01-26 23:28:08.408393069 +0000 UTC m=+1192.573100564" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.052056 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"] Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.053285 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.070435 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"] Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.160310 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"] Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.161369 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.163452 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.184516 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"] Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.205249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.205328 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306582 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306616 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.306650 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.307444 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 
23:28:09.326947 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"watcher-db-create-hmlpp\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.369943 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.415267 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.415328 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.417314 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.437068 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7n5v\" 
(UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"watcher-ea1c-account-create-update-9lt5d\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.481448 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:09 crc kubenswrapper[4995]: I0126 23:28:09.630617 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"] Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.060700 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"] Jan 26 23:28:10 crc kubenswrapper[4995]: W0126 23:28:10.076161 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26594adb_ad3b_4555_a2a2_085ac874b80f.slice/crio-fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934 WatchSource:0}: Error finding container fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934: Status 404 returned error can't find the container with id fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934 Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.414902 4995 generic.go:334] "Generic (PLEG): container finished" podID="81de5920-673a-4656-812a-cd9418a924ad" containerID="21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872" exitCode=0 Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.414970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerDied","Data":"21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872"} Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 
23:28:10.415306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerStarted","Data":"0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17"} Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.416893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerStarted","Data":"5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964"} Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.416939 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerStarted","Data":"fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934"} Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.452847 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" podStartSLOduration=1.452807395 podStartE2EDuration="1.452807395s" podCreationTimestamp="2026-01-26 23:28:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:28:10.451487722 +0000 UTC m=+1194.616195197" watchObservedRunningTime="2026-01-26 23:28:10.452807395 +0000 UTC m=+1194.617514860" Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.894316 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:28:10 crc kubenswrapper[4995]: I0126 23:28:10.894365 4995 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.213696 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.252909 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.252989 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253027 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253069 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253193 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253221 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.253870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.254297 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.259356 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg" (OuterVolumeSpecName: "kube-api-access-s4dwg") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "kube-api-access-s4dwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.269732 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts" (OuterVolumeSpecName: "scripts") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.286846 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.335815 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.355217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data" (OuterVolumeSpecName: "config-data") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.355733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") pod \"f03d11d7-e58e-4d08-85b4-c512e9deb887\" (UID: \"f03d11d7-e58e-4d08-85b4-c512e9deb887\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356214 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f03d11d7-e58e-4d08-85b4-c512e9deb887" (UID: "f03d11d7-e58e-4d08-85b4-c512e9deb887"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356566 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f03d11d7-e58e-4d08-85b4-c512e9deb887-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356592 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4dwg\" (UniqueName: \"kubernetes.io/projected/f03d11d7-e58e-4d08-85b4-c512e9deb887-kube-api-access-s4dwg\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356605 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356618 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356629 4995 reconciler_common.go:293] "Volume detached 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.356639 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03d11d7-e58e-4d08-85b4-c512e9deb887-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.424842 4995 generic.go:334] "Generic (PLEG): container finished" podID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerID="5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964" exitCode=0 Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.424927 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerDied","Data":"5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427288 4995 generic.go:334] "Generic (PLEG): container finished" podID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" exitCode=0 Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427331 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427379 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f03d11d7-e58e-4d08-85b4-c512e9deb887","Type":"ContainerDied","Data":"b81f7b39adcb26c2a82824c28d1ccdf73fe6f1cd66212b80b1c42bda2ae42625"} Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427386 4995 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.427397 4995 scope.go:117] "RemoveContainer" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.449897 4995 scope.go:117] "RemoveContainer" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.463355 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.469707 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.488139 4995 scope.go:117] "RemoveContainer" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500067 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.500718 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500793 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.500865 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.500923 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.501011 4995 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501071 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.501161 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501227 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501476 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-central-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501555 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="sg-core" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501627 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="ceilometer-notification-agent" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.501696 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" containerName="proxy-httpd" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.504340 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.509389 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.510200 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.510329 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.527698 4995 scope.go:117] "RemoveContainer" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.534853 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.581197 4995 scope.go:117] "RemoveContainer" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.582808 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": container with ID starting with 3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91 not found: ID does not exist" containerID="3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.582917 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91"} err="failed to get container status \"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": rpc error: code = NotFound desc = could not find container 
\"3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91\": container with ID starting with 3b405ee35884e1c5a35a15063d03c404793534c743d1fbe44b67de1a81c6cc91 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.582991 4995 scope.go:117] "RemoveContainer" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584019 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": container with ID starting with 9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326 not found: ID does not exist" containerID="9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584067 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326"} err="failed to get container status \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": rpc error: code = NotFound desc = could not find container \"9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326\": container with ID starting with 9fbc52fdae60101dec83973fb46326a7f214a0620eebf8ab95eb8116e2615326 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584096 4995 scope.go:117] "RemoveContainer" containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584492 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": container with ID starting with 56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4 not found: ID does not exist" 
containerID="56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584515 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4"} err="failed to get container status \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": rpc error: code = NotFound desc = could not find container \"56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4\": container with ID starting with 56e809dfa896e05d2350e990b7e537d8d560bbee1b1461a368664493538379a4 not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584531 4995 scope.go:117] "RemoveContainer" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: E0126 23:28:11.584724 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": container with ID starting with 36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c not found: ID does not exist" containerID="36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.584745 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c"} err="failed to get container status \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": rpc error: code = NotFound desc = could not find container \"36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c\": container with ID starting with 36bfc0959d1e400bc77bfe2f507bc0bce17424e486cf29adff71e7f1c50c9e2c not found: ID does not exist" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662556 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662698 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662776 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662800 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.662833 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764154 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764232 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764256 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764280 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764312 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764353 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764369 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764389 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.764908 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.765230 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.769298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.770092 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.770694 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.772582 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.788356 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.790167 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"ceilometer-0\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.825881 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.829661 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.972681 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") pod \"81de5920-673a-4656-812a-cd9418a924ad\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.972759 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") pod \"81de5920-673a-4656-812a-cd9418a924ad\" (UID: \"81de5920-673a-4656-812a-cd9418a924ad\") " Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.973704 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81de5920-673a-4656-812a-cd9418a924ad" (UID: "81de5920-673a-4656-812a-cd9418a924ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.974169 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81de5920-673a-4656-812a-cd9418a924ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:11 crc kubenswrapper[4995]: I0126 23:28:11.977725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb" (OuterVolumeSpecName: "kube-api-access-cwxdb") pod "81de5920-673a-4656-812a-cd9418a924ad" (UID: "81de5920-673a-4656-812a-cd9418a924ad"). InnerVolumeSpecName "kube-api-access-cwxdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.075757 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwxdb\" (UniqueName: \"kubernetes.io/projected/81de5920-673a-4656-812a-cd9418a924ad-kube-api-access-cwxdb\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.271867 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436439 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-hmlpp" event={"ID":"81de5920-673a-4656-812a-cd9418a924ad","Type":"ContainerDied","Data":"0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17"} Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436782 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1ae533ea537da47e0e95a9947b701f4ba1a47c8850e06cdbb1c339c7758d17" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.436467 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-hmlpp" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.438790 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"dbe0ac9e615dc8b84fc279cb1855295fa12e48224c261aad6672dc012a0042f7"} Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.533001 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03d11d7-e58e-4d08-85b4-c512e9deb887" path="/var/lib/kubelet/pods/f03d11d7-e58e-4d08-85b4-c512e9deb887/volumes" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.865957 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.990309 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") pod \"26594adb-ad3b-4555-a2a2-085ac874b80f\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.990465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") pod \"26594adb-ad3b-4555-a2a2-085ac874b80f\" (UID: \"26594adb-ad3b-4555-a2a2-085ac874b80f\") " Jan 26 23:28:12 crc kubenswrapper[4995]: I0126 23:28:12.996624 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26594adb-ad3b-4555-a2a2-085ac874b80f" (UID: "26594adb-ad3b-4555-a2a2-085ac874b80f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.002274 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v" (OuterVolumeSpecName: "kube-api-access-d7n5v") pod "26594adb-ad3b-4555-a2a2-085ac874b80f" (UID: "26594adb-ad3b-4555-a2a2-085ac874b80f"). InnerVolumeSpecName "kube-api-access-d7n5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.091979 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7n5v\" (UniqueName: \"kubernetes.io/projected/26594adb-ad3b-4555-a2a2-085ac874b80f-kube-api-access-d7n5v\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.092018 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26594adb-ad3b-4555-a2a2-085ac874b80f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450114 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" event={"ID":"26594adb-ad3b-4555-a2a2-085ac874b80f","Type":"ContainerDied","Data":"fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934"} Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450155 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc6aa815a9ea9f36e2e2e92e7fd82dae698f70f9233899d7ebcf73bb2ad3f934" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.450204 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d" Jan 26 23:28:13 crc kubenswrapper[4995]: I0126 23:28:13.452739 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.398571 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:14 crc kubenswrapper[4995]: E0126 23:28:14.399349 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399361 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: E0126 23:28:14.399384 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399390 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399525 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" containerName="mariadb-account-create-update" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.399543 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="81de5920-673a-4656-812a-cd9418a924ad" containerName="mariadb-database-create" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.400033 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.401909 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.402176 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wtfkp" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.416883 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.485879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.486171 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"} Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515364 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515443 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: 
\"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515497 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.515526 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.617260 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.617354 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.618057 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.618217 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.621844 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.621876 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.625468 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.650616 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zppxx\" (UniqueName: 
\"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"watcher-kuttl-db-sync-7k9rx\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:14 crc kubenswrapper[4995]: I0126 23:28:14.727943 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:15 crc kubenswrapper[4995]: I0126 23:28:15.169189 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:28:15 crc kubenswrapper[4995]: I0126 23:28:15.498217 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerStarted","Data":"388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd"} Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.547506 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerStarted","Data":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.548012 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.595147 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.037417901 podStartE2EDuration="5.595127609s" podCreationTimestamp="2026-01-26 23:28:11 +0000 UTC" firstStartedPulling="2026-01-26 23:28:12.27759578 +0000 UTC m=+1196.442303245" lastFinishedPulling="2026-01-26 23:28:15.835305488 +0000 UTC m=+1200.000012953" observedRunningTime="2026-01-26 23:28:16.589909299 +0000 UTC m=+1200.754616764" watchObservedRunningTime="2026-01-26 
23:28:16.595127609 +0000 UTC m=+1200.759835074" Jan 26 23:28:16 crc kubenswrapper[4995]: I0126 23:28:16.790766 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/kube-state-metrics-0" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.250515 4995 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.251158 4995 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.251298 4995 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-kuttl-db-sync,Image:38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zppxx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-kuttl-db-sync-7k9rx_watcher-kuttl-default(c14084ec-5346-48a7-8e93-0d4638601584): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.252540 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podUID="c14084ec-5346-48a7-8e93-0d4638601584" Jan 26 23:28:32 crc kubenswrapper[4995]: E0126 23:28:32.682387 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-kuttl-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.223:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podUID="c14084ec-5346-48a7-8e93-0d4638601584" Jan 26 23:28:40 crc kubenswrapper[4995]: I0126 23:28:40.894373 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:28:40 crc kubenswrapper[4995]: I0126 23:28:40.895080 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:28:41 crc kubenswrapper[4995]: I0126 23:28:41.834819 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:28:48 crc kubenswrapper[4995]: I0126 23:28:48.817365 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerStarted","Data":"2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb"} Jan 26 23:28:48 crc kubenswrapper[4995]: I0126 23:28:48.837225 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" podStartSLOduration=2.364789971 podStartE2EDuration="34.83720559s" podCreationTimestamp="2026-01-26 23:28:14 +0000 UTC" firstStartedPulling="2026-01-26 23:28:15.169724379 +0000 UTC m=+1199.334431844" lastFinishedPulling="2026-01-26 23:28:47.642139998 +0000 UTC m=+1231.806847463" observedRunningTime="2026-01-26 23:28:48.836005 +0000 UTC m=+1233.000712455" watchObservedRunningTime="2026-01-26 23:28:48.83720559 +0000 UTC m=+1233.001913055" Jan 26 23:28:51 crc kubenswrapper[4995]: I0126 23:28:51.845338 4995 generic.go:334] "Generic (PLEG): container finished" podID="c14084ec-5346-48a7-8e93-0d4638601584" containerID="2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb" exitCode=0 Jan 26 23:28:51 crc kubenswrapper[4995]: I0126 23:28:51.845456 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerDied","Data":"2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb"} Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.247501 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307847 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.307895 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") pod \"c14084ec-5346-48a7-8e93-0d4638601584\" (UID: \"c14084ec-5346-48a7-8e93-0d4638601584\") " Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.334024 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.351795 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.352306 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx" (OuterVolumeSpecName: "kube-api-access-zppxx") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "kube-api-access-zppxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.396212 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data" (OuterVolumeSpecName: "config-data") pod "c14084ec-5346-48a7-8e93-0d4638601584" (UID: "c14084ec-5346-48a7-8e93-0d4638601584"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409097 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409164 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409177 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zppxx\" (UniqueName: \"kubernetes.io/projected/c14084ec-5346-48a7-8e93-0d4638601584-kube-api-access-zppxx\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.409189 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c14084ec-5346-48a7-8e93-0d4638601584-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864426 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" event={"ID":"c14084ec-5346-48a7-8e93-0d4638601584","Type":"ContainerDied","Data":"388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd"} Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864480 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="388c50d18a23e10226005bb761b532a970180186cfbc322bfd8ac5e8e2e0d0dd" Jan 26 23:28:53 crc kubenswrapper[4995]: I0126 23:28:53.864511 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.337964 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: E0126 23:28:54.338676 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.338704 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.338908 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14084ec-5346-48a7-8e93-0d4638601584" containerName="watcher-kuttl-db-sync" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.339586 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.341879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-wtfkp" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.341997 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.346978 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.348727 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.351628 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.356637 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.366282 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.423559 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425777 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425831 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 
23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425881 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425908 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425926 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.425992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: 
\"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426006 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426066 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.426176 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.428551 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.449850 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526839 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526879 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526967 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.526982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527194 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527634 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527694 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527783 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527825 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527903 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.527928 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.528665 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.532497 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536435 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.536859 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.537706 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.541064 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.543744 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"watcher-kuttl-api-0\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.544048 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.551470 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629273 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629337 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629578 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629627 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.629847 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.633122 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.634175 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.648970 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"watcher-kuttl-applier-0\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.659864 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.672585 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:54 crc kubenswrapper[4995]: I0126 23:28:54.748803 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.138414 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.217033 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: W0126 23:28:55.218969 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod137b9b9c_ff0c_461b_9731_8322ae411e99.slice/crio-4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263 WatchSource:0}: Error finding container 4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263: Status 404 returned error can't find the container with id 4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263 Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.272054 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:28:55 crc kubenswrapper[4995]: W0126 23:28:55.284388 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod754307e8_af63_4e45_8bbe_b4daf4ba4e1e.slice/crio-1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb WatchSource:0}: Error finding container 1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb: Status 404 returned error can't find the container with id 1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.886633 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerStarted","Data":"4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.888639 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerStarted","Data":"1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891269 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891315 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891326 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerStarted","Data":"eb2bd9a9347adb71b2820f1e1c4d33905377b5c57d14b319ec1266892f2f2ad3"} Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.891582 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:55 crc kubenswrapper[4995]: I0126 23:28:55.916131 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.9160877649999999 podStartE2EDuration="1.916087765s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:28:55.907794498 +0000 UTC m=+1240.072501963" watchObservedRunningTime="2026-01-26 23:28:55.916087765 +0000 UTC m=+1240.080795240" Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.898960 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerStarted","Data":"b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432"} Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.902730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerStarted","Data":"38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8"} Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.919385 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.869192154 podStartE2EDuration="2.91935716s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="2026-01-26 23:28:55.286795681 +0000 UTC m=+1239.451503166" lastFinishedPulling="2026-01-26 23:28:56.336960677 +0000 UTC m=+1240.501668172" observedRunningTime="2026-01-26 23:28:56.91491398 +0000 UTC m=+1241.079621445" watchObservedRunningTime="2026-01-26 23:28:56.91935716 +0000 UTC m=+1241.084064625" Jan 26 23:28:56 crc kubenswrapper[4995]: I0126 23:28:56.935331 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=1.826749575 podStartE2EDuration="2.935314099s" podCreationTimestamp="2026-01-26 23:28:54 +0000 UTC" firstStartedPulling="2026-01-26 23:28:55.220878596 +0000 UTC m=+1239.385586061" lastFinishedPulling="2026-01-26 23:28:56.32944312 +0000 UTC m=+1240.494150585" 
observedRunningTime="2026-01-26 23:28:56.931572245 +0000 UTC m=+1241.096279710" watchObservedRunningTime="2026-01-26 23:28:56.935314099 +0000 UTC m=+1241.100021564" Jan 26 23:28:58 crc kubenswrapper[4995]: I0126 23:28:58.503557 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:59 crc kubenswrapper[4995]: I0126 23:28:59.686067 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:28:59 crc kubenswrapper[4995]: I0126 23:28:59.750459 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.660076 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.673677 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.680472 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.683721 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.749586 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.777730 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.976394 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:04 crc kubenswrapper[4995]: I0126 23:29:04.979891 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:05 crc kubenswrapper[4995]: I0126 23:29:05.006650 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:05 crc kubenswrapper[4995]: I0126 23:29:05.010591 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.214505 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215123 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent" containerID="cri-o://7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215206 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core" containerID="cri-o://f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215250 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent" containerID="cri-o://9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.215376 4995 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd" containerID="cri-o://b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.361691 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.373546 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-7k9rx"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.437345 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.438642 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.451332 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.451608 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" containerID="cri-o://b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.460826 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.478883 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551321 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551528 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log" containerID="cri-o://7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.551907 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api" containerID="cri-o://309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" gracePeriod=30 Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.576549 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.576674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.678589 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: 
\"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.679476 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.679676 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.707040 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"watcherea1c-account-delete-qmvms\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:07 crc kubenswrapper[4995]: I0126 23:29:07.781285 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.032291 4995 generic.go:334] "Generic (PLEG): container finished" podID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" exitCode=143 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.032531 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045451 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" exitCode=0 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045469 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" exitCode=2 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045478 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" exitCode=0 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045613 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine" containerID="cri-o://38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" gracePeriod=30 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045860 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045884 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.045893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.310654 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:08 crc kubenswrapper[4995]: W0126 23:29:08.327596 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d491283_0ac3_4f24_88c1_6a380d594919.slice/crio-4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066 WatchSource:0}: Error finding container 4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066: Status 404 returned error can't find the container with id 4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066 Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.532119 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14084ec-5346-48a7-8e93-0d4638601584" path="/var/lib/kubelet/pods/c14084ec-5346-48a7-8e93-0d4638601584/volumes" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.749078 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806136 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806235 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806296 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.806398 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") pod \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\" (UID: \"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a\") " Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.808568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs" (OuterVolumeSpecName: "logs") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.813036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph" (OuterVolumeSpecName: "kube-api-access-gxvph") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "kube-api-access-gxvph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.835305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.846397 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.866009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data" (OuterVolumeSpecName: "config-data") pod "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" (UID: "5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907402 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxvph\" (UniqueName: \"kubernetes.io/projected/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-kube-api-access-gxvph\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907441 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907453 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907465 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:08 crc kubenswrapper[4995]: I0126 23:29:08.907478 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054689 4995 generic.go:334] "Generic (PLEG): container finished" podID="1d491283-0ac3-4f24-88c1-6a380d594919" containerID="558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff" exitCode=0 Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054754 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" 
event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerDied","Data":"558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.054779 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerStarted","Data":"4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056739 4995 generic.go:334] "Generic (PLEG): container finished" podID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" exitCode=0 Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056802 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056812 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056847 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a","Type":"ContainerDied","Data":"eb2bd9a9347adb71b2820f1e1c4d33905377b5c57d14b319ec1266892f2f2ad3"} Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.056876 4995 scope.go:117] "RemoveContainer" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.087011 4995 scope.go:117] "RemoveContainer" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.102758 4995 scope.go:117] "RemoveContainer" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.103158 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": container with ID starting with 309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826 not found: ID does not exist" containerID="309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.103200 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826"} err="failed to get container status \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": rpc error: code = NotFound desc = could not find container \"309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826\": container with ID starting with 309c334b850c836051ec61aa0cf5d7f56e9e89dd11e7041f40605cacf5afc826 not found: ID does not exist" Jan 26 23:29:09 
crc kubenswrapper[4995]: I0126 23:29:09.103226 4995 scope.go:117] "RemoveContainer" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.103446 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": container with ID starting with 7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22 not found: ID does not exist" containerID="7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.103464 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22"} err="failed to get container status \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": rpc error: code = NotFound desc = could not find container \"7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22\": container with ID starting with 7e7e26880ee7598186f6314bae5631228276fead4253e944dfa4d0d2495b6a22 not found: ID does not exist" Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.107664 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:09 crc kubenswrapper[4995]: I0126 23:29:09.114427 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.755950 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 
23:29:09.762538 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.764410 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:09 crc kubenswrapper[4995]: E0126 23:29:09.764472 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067219 4995 generic.go:334] "Generic (PLEG): container finished" podID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerID="38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" exitCode=0 Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067295 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerDied","Data":"38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8"} Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067337 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"137b9b9c-ff0c-461b-9731-8322ae411e99","Type":"ContainerDied","Data":"4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263"} Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.067349 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ac0957f2ab63623cc81f58778223691d3bf275c4fb81b1740c6d15825d4a263" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.097679 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.230738 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231233 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs" (OuterVolumeSpecName: "logs") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231394 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231431 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231539 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231588 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") pod \"137b9b9c-ff0c-461b-9731-8322ae411e99\" (UID: \"137b9b9c-ff0c-461b-9731-8322ae411e99\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.231984 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/137b9b9c-ff0c-461b-9731-8322ae411e99-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.243341 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g" (OuterVolumeSpecName: 
"kube-api-access-8n57g") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "kube-api-access-8n57g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.262089 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.290407 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.310345 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data" (OuterVolumeSpecName: "config-data") pod "137b9b9c-ff0c-461b-9731-8322ae411e99" (UID: "137b9b9c-ff0c-461b-9731-8322ae411e99"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337932 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337983 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.337999 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n57g\" (UniqueName: \"kubernetes.io/projected/137b9b9c-ff0c-461b-9731-8322ae411e99-kube-api-access-8n57g\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.338011 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/137b9b9c-ff0c-461b-9731-8322ae411e99-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.449393 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.530128 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" path="/var/lib/kubelet/pods/5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a/volumes" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.642163 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") pod \"1d491283-0ac3-4f24-88c1-6a380d594919\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.642593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") pod \"1d491283-0ac3-4f24-88c1-6a380d594919\" (UID: \"1d491283-0ac3-4f24-88c1-6a380d594919\") " Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.643253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d491283-0ac3-4f24-88c1-6a380d594919" (UID: "1d491283-0ac3-4f24-88c1-6a380d594919"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.650864 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt" (OuterVolumeSpecName: "kube-api-access-jcfxt") pod "1d491283-0ac3-4f24-88c1-6a380d594919" (UID: "1d491283-0ac3-4f24-88c1-6a380d594919"). InnerVolumeSpecName "kube-api-access-jcfxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.744290 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcfxt\" (UniqueName: \"kubernetes.io/projected/1d491283-0ac3-4f24-88c1-6a380d594919-kube-api-access-jcfxt\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.744332 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d491283-0ac3-4f24-88c1-6a380d594919-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894288 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894379 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.894443 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.895460 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 26 23:29:10 crc kubenswrapper[4995]: I0126 23:29:10.895559 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" gracePeriod=600 Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.093560 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" exitCode=0 Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.093645 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69"} Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.094026 4995 scope.go:117] "RemoveContainer" containerID="c18e947f3e89f6e4fe1ccdfb2540e67e2ab73a82cdb82488bfa3e6e58cba1576" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102380 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102412 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.102415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherea1c-account-delete-qmvms" event={"ID":"1d491283-0ac3-4f24-88c1-6a380d594919","Type":"ContainerDied","Data":"4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066"} Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.103611 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d829ac560b84f8a06da97bc770ac8e3a715be7116d2f0d12e39059dd8d28066" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.135510 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.145320 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.725844 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760582 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760624 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760646 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760697 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760851 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwvvq\" (UniqueName: 
\"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760901 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.760929 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") pod \"bb374cf7-1f64-4981-8500-45743b6c245d\" (UID: \"bb374cf7-1f64-4981-8500-45743b6c245d\") " Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.762009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.764000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.769440 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq" (OuterVolumeSpecName: "kube-api-access-xwvvq") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "kube-api-access-xwvvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.772597 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts" (OuterVolumeSpecName: "scripts") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.814657 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.815303 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.854081 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862651 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862695 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwvvq\" (UniqueName: \"kubernetes.io/projected/bb374cf7-1f64-4981-8500-45743b6c245d-kube-api-access-xwvvq\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862709 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862720 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bb374cf7-1f64-4981-8500-45743b6c245d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862733 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862745 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.862757 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.896297 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data" (OuterVolumeSpecName: "config-data") pod "bb374cf7-1f64-4981-8500-45743b6c245d" (UID: "bb374cf7-1f64-4981-8500-45743b6c245d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:11 crc kubenswrapper[4995]: I0126 23:29:11.963732 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb374cf7-1f64-4981-8500-45743b6c245d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.131541 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"} Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.134983 4995 generic.go:334] "Generic (PLEG): container finished" podID="bb374cf7-1f64-4981-8500-45743b6c245d" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" exitCode=0 Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.135221 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"} Jan 26 23:29:12 crc 
kubenswrapper[4995]: I0126 23:29:12.135316 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"bb374cf7-1f64-4981-8500-45743b6c245d","Type":"ContainerDied","Data":"dbe0ac9e615dc8b84fc279cb1855295fa12e48224c261aad6672dc012a0042f7"} Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.135373 4995 scope.go:117] "RemoveContainer" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.136340 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.162267 4995 scope.go:117] "RemoveContainer" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.191221 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.205165 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213216 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213574 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213593 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213604 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213611 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213624 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213631 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213644 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213650 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213659 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213665 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213676 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213681 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213694 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent" Jan 26 23:29:12 crc 
kubenswrapper[4995]: I0126 23:29:12.213700 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.213708 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213738 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="sg-core" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213932 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-notification-agent" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213948 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-api" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213961 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fc0e7a0-07a2-444a-8bdd-55d30d53fa3a" containerName="watcher-kuttl-api-log" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213974 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" containerName="mariadb-account-delete" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213985 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="proxy-httpd" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.213992 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" containerName="watcher-decision-engine" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.214002 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" 
containerName="sg-core" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.214013 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" containerName="ceilometer-central-agent" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.215782 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219643 4995 scope.go:117] "RemoveContainer" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219904 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.219959 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.220245 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.235757 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.263362 4995 scope.go:117] "RemoveContainer" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270528 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270604 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270638 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270667 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270757 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270787 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.270835 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.288659 4995 scope.go:117] "RemoveContainer" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.289446 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": container with ID starting with b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a not found: ID does not exist" containerID="b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289495 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a"} err="failed to get container status \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": rpc error: code = NotFound desc = could not find container \"b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a\": container with ID starting with b603b71463899a8e0be5c45135e2d5679df92672e7b552773fc5249cd34d369a not found: ID does not exist" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289528 4995 scope.go:117] "RemoveContainer" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" Jan 26 
23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.289866 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": container with ID starting with f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0 not found: ID does not exist" containerID="f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289890 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0"} err="failed to get container status \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": rpc error: code = NotFound desc = could not find container \"f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0\": container with ID starting with f25a9269df1c20d299b975eadd012967684983c243d41958a695170dae7817f0 not found: ID does not exist" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.289908 4995 scope.go:117] "RemoveContainer" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.290344 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": container with ID starting with 9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885 not found: ID does not exist" containerID="9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290372 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885"} err="failed to get container status 
\"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": rpc error: code = NotFound desc = could not find container \"9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885\": container with ID starting with 9313b45f54b4edd9947bb1b5450fba520541d623625d49a070336c3328f76885 not found: ID does not exist" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290391 4995 scope.go:117] "RemoveContainer" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" Jan 26 23:29:12 crc kubenswrapper[4995]: E0126 23:29:12.290670 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": container with ID starting with 7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33 not found: ID does not exist" containerID="7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.290698 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33"} err="failed to get container status \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": rpc error: code = NotFound desc = could not find container \"7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33\": container with ID starting with 7c08c1fd9eed26c7bfc45fb3b2acd9908edce29ac20f3fbd2d52559a94442a33 not found: ID does not exist" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.371951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372020 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372058 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372079 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372117 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372137 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372172 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vhj\" (UniqueName: 
\"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.372198 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.373574 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.373798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.376644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.377773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.377857 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.378709 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.380974 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.392546 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"ceilometer-0\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.464088 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.470640 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-hmlpp"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.498232 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.509287 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.527811 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="137b9b9c-ff0c-461b-9731-8322ae411e99" path="/var/lib/kubelet/pods/137b9b9c-ff0c-461b-9731-8322ae411e99/volumes" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.528416 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81de5920-673a-4656-812a-cd9418a924ad" path="/var/lib/kubelet/pods/81de5920-673a-4656-812a-cd9418a924ad/volumes" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.528912 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb374cf7-1f64-4981-8500-45743b6c245d" path="/var/lib/kubelet/pods/bb374cf7-1f64-4981-8500-45743b6c245d/volumes" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.530549 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherea1c-account-delete-qmvms"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.530610 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-ea1c-account-create-update-9lt5d"] Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.537814 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:12 crc kubenswrapper[4995]: I0126 23:29:12.996697 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:13 crc kubenswrapper[4995]: W0126 23:29:13.010567 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b175699_64e9_4d8e_a89b_6a80468dd954.slice/crio-59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a WatchSource:0}: Error finding container 59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a: Status 404 returned error can't find the container with id 59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.014383 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.151692 4995 generic.go:334] "Generic (PLEG): container finished" podID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" exitCode=0 Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.151783 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerDied","Data":"b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432"} Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.153791 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a"} Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.173155 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287318 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287378 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287502 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.287570 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") pod \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\" (UID: \"754307e8-af63-4e45-8bbe-b4daf4ba4e1e\") " Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.288361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs" (OuterVolumeSpecName: "logs") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.289077 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.295728 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9" (OuterVolumeSpecName: "kube-api-access-wb9q9") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "kube-api-access-wb9q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.313568 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.347217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data" (OuterVolumeSpecName: "config-data") pod "754307e8-af63-4e45-8bbe-b4daf4ba4e1e" (UID: "754307e8-af63-4e45-8bbe-b4daf4ba4e1e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390684 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb9q9\" (UniqueName: \"kubernetes.io/projected/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-kube-api-access-wb9q9\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390730 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:13 crc kubenswrapper[4995]: I0126 23:29:13.390745 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/754307e8-af63-4e45-8bbe-b4daf4ba4e1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.161489 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"} Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163832 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"754307e8-af63-4e45-8bbe-b4daf4ba4e1e","Type":"ContainerDied","Data":"1192b13cd713adc467415e40fdefdf9c5c74e713846c370c81b0cb0acaaac6eb"} Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163865 4995 scope.go:117] "RemoveContainer" containerID="b7fd57376a47e1224007d3926dfa1af75748ccd3858d7eba6448a5fef6ce6432" Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.163967 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.208158 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.216624 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.526375 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d491283-0ac3-4f24-88c1-6a380d594919" path="/var/lib/kubelet/pods/1d491283-0ac3-4f24-88c1-6a380d594919/volumes" Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.527163 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26594adb-ad3b-4555-a2a2-085ac874b80f" path="/var/lib/kubelet/pods/26594adb-ad3b-4555-a2a2-085ac874b80f/volumes" Jan 26 23:29:14 crc kubenswrapper[4995]: I0126 23:29:14.527666 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" path="/var/lib/kubelet/pods/754307e8-af63-4e45-8bbe-b4daf4ba4e1e/volumes" Jan 26 23:29:15 crc kubenswrapper[4995]: I0126 23:29:15.180092 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"} Jan 26 23:29:16 crc kubenswrapper[4995]: I0126 23:29:16.197205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"} Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.207487 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerStarted","Data":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"} Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.207884 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.244145 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.416647781 podStartE2EDuration="5.244121981s" podCreationTimestamp="2026-01-26 23:29:12 +0000 UTC" firstStartedPulling="2026-01-26 23:29:13.014049364 +0000 UTC m=+1257.178756839" lastFinishedPulling="2026-01-26 23:29:16.841523534 +0000 UTC m=+1261.006231039" observedRunningTime="2026-01-26 23:29:17.241543246 +0000 UTC m=+1261.406250731" watchObservedRunningTime="2026-01-26 23:29:17.244121981 +0000 UTC m=+1261.408829446" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.660581 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:17 crc kubenswrapper[4995]: E0126 23:29:17.660993 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661013 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661226 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="754307e8-af63-4e45-8bbe-b4daf4ba4e1e" containerName="watcher-applier" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.661902 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.677256 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.765915 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.767095 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.769378 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.783771 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.797634 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.797718 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899052 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899147 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899176 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.899212 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.900045 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 
23:29:17.918623 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"watcher-db-create-6ch9m\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:17 crc kubenswrapper[4995]: I0126 23:29:17.976091 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.001430 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.001511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.002551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.034328 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvgd\" 
(UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"watcher-17d4-account-create-update-dj9g9\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.080715 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.685608 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:18 crc kubenswrapper[4995]: W0126 23:29:18.690803 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0710d60_452a_4ffb_80e7_cf4b95c4b93c.slice/crio-68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473 WatchSource:0}: Error finding container 68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473: Status 404 returned error can't find the container with id 68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473 Jan 26 23:29:18 crc kubenswrapper[4995]: W0126 23:29:18.769617 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb61ff94_84e4_46ff_affd_1d1fd691a219.slice/crio-28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f WatchSource:0}: Error finding container 28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f: Status 404 returned error can't find the container with id 28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f Jan 26 23:29:18 crc kubenswrapper[4995]: I0126 23:29:18.770574 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244719 4995 
generic.go:334] "Generic (PLEG): container finished" podID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerID="54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826" exitCode=0 Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244803 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerDied","Data":"54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.244846 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerStarted","Data":"28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248651 4995 generic.go:334] "Generic (PLEG): container finished" podID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerID="8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733" exitCode=0 Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerDied","Data":"8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733"} Jan 26 23:29:19 crc kubenswrapper[4995]: I0126 23:29:19.248777 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerStarted","Data":"68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473"} Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.735987 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.751598 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854718 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") pod \"db61ff94-84e4-46ff-affd-1d1fd691a219\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854836 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") pod \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854872 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") pod \"db61ff94-84e4-46ff-affd-1d1fd691a219\" (UID: \"db61ff94-84e4-46ff-affd-1d1fd691a219\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.854918 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") pod \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\" (UID: \"c0710d60-452a-4ffb-80e7-cf4b95c4b93c\") " Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.856301 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "c0710d60-452a-4ffb-80e7-cf4b95c4b93c" (UID: "c0710d60-452a-4ffb-80e7-cf4b95c4b93c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.856309 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db61ff94-84e4-46ff-affd-1d1fd691a219" (UID: "db61ff94-84e4-46ff-affd-1d1fd691a219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.863374 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd" (OuterVolumeSpecName: "kube-api-access-8cvgd") pod "db61ff94-84e4-46ff-affd-1d1fd691a219" (UID: "db61ff94-84e4-46ff-affd-1d1fd691a219"). InnerVolumeSpecName "kube-api-access-8cvgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.865259 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2" (OuterVolumeSpecName: "kube-api-access-6l4b2") pod "c0710d60-452a-4ffb-80e7-cf4b95c4b93c" (UID: "c0710d60-452a-4ffb-80e7-cf4b95c4b93c"). InnerVolumeSpecName "kube-api-access-6l4b2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957189 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957244 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db61ff94-84e4-46ff-affd-1d1fd691a219-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957267 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l4b2\" (UniqueName: \"kubernetes.io/projected/c0710d60-452a-4ffb-80e7-cf4b95c4b93c-kube-api-access-6l4b2\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:20 crc kubenswrapper[4995]: I0126 23:29:20.957290 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cvgd\" (UniqueName: \"kubernetes.io/projected/db61ff94-84e4-46ff-affd-1d1fd691a219-kube-api-access-8cvgd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268029 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-6ch9m" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268074 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-6ch9m" event={"ID":"c0710d60-452a-4ffb-80e7-cf4b95c4b93c","Type":"ContainerDied","Data":"68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473"} Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.268671 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f2a7ce8aa818e717404ca12002ff3925a5d8d9603dabff3f0d6f462f935473" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.270903 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" event={"ID":"db61ff94-84e4-46ff-affd-1d1fd691a219","Type":"ContainerDied","Data":"28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f"} Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.271312 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28c67c5462134ea89367f6a2ea623c7af940d0f839f46931d6174c2e9e2d517f" Jan 26 23:29:21 crc kubenswrapper[4995]: I0126 23:29:21.270986 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103094 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:23 crc kubenswrapper[4995]: E0126 23:29:23.103448 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103461 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: E0126 23:29:23.103475 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103481 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103630 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" containerName="mariadb-account-create-update" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.103648 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" containerName="mariadb-database-create" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.104190 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.106556 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.106777 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6ndh2" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.118685 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209286 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209335 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.209734 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.210049 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311721 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311834 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.311890 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: 
I0126 23:29:23.317671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.317955 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.328024 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.330568 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"watcher-kuttl-db-sync-k4kxp\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.425758 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:23 crc kubenswrapper[4995]: I0126 23:29:23.916378 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.297424 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerStarted","Data":"fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9"} Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.297788 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerStarted","Data":"4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90"} Jan 26 23:29:24 crc kubenswrapper[4995]: I0126 23:29:24.320168 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" podStartSLOduration=1.320142674 podStartE2EDuration="1.320142674s" podCreationTimestamp="2026-01-26 23:29:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:24.313898478 +0000 UTC m=+1268.478605963" watchObservedRunningTime="2026-01-26 23:29:24.320142674 +0000 UTC m=+1268.484850149" Jan 26 23:29:26 crc kubenswrapper[4995]: I0126 23:29:26.313414 4995 generic.go:334] "Generic (PLEG): container finished" podID="fe82d30b-18d6-486f-9494-034434237785" containerID="fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9" exitCode=0 Jan 26 23:29:26 crc kubenswrapper[4995]: I0126 23:29:26.313498 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" 
event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerDied","Data":"fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9"} Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.692950 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.889432 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890076 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890189 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.890333 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") pod \"fe82d30b-18d6-486f-9494-034434237785\" (UID: \"fe82d30b-18d6-486f-9494-034434237785\") " Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.895345 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.899513 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q" (OuterVolumeSpecName: "kube-api-access-2xq7q") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "kube-api-access-2xq7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.925526 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.948489 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data" (OuterVolumeSpecName: "config-data") pod "fe82d30b-18d6-486f-9494-034434237785" (UID: "fe82d30b-18d6-486f-9494-034434237785"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992719 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992768 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xq7q\" (UniqueName: \"kubernetes.io/projected/fe82d30b-18d6-486f-9494-034434237785-kube-api-access-2xq7q\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992790 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:27 crc kubenswrapper[4995]: I0126 23:29:27.992807 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe82d30b-18d6-486f-9494-034434237785-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.332680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" event={"ID":"fe82d30b-18d6-486f-9494-034434237785","Type":"ContainerDied","Data":"4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90"} Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.332778 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4955a65b2e034b25c1ec838fdd45111ea48a617f4dccc4e382e42571a24a1c90" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.333197 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.714713 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: E0126 23:29:28.715083 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.715098 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.715295 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe82d30b-18d6-486f-9494-034434237785" containerName="watcher-kuttl-db-sync" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.716075 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.718229 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.718458 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.719079 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.719125 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-6ndh2" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.722915 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:28 crc 
kubenswrapper[4995]: I0126 23:29:28.724028 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.726359 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777615 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777722 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777752 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 
23:29:28.777772 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777806 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.777828 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.778181 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.792565 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.816152 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.817446 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.819679 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.832298 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878902 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.878993 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879031 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879047 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879061 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879082 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879143 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod 
\"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879169 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879188 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879219 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879235 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879251 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879274 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.879296 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.882849 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.885710 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886050 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: 
\"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886458 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.886980 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.897412 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.901405 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"watcher-kuttl-api-0\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980374 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980458 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980483 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980546 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980580 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 
crc kubenswrapper[4995]: I0126 23:29:28.980604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980634 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.980664 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.981167 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.982761 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.986946 
4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987146 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987168 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.987408 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:28 crc kubenswrapper[4995]: I0126 23:29:28.997640 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.003226 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"watcher-kuttl-applier-0\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.003965 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.074051 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.074641 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.131381 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.544490 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:29 crc kubenswrapper[4995]: W0126 23:29:29.548456 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bb37bcc_61c1_4154_8ee5_991a34693b5d.slice/crio-078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9 WatchSource:0}: Error finding container 078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9: Status 404 returned error can't find the container with id 078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9 Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.553399 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:29 crc kubenswrapper[4995]: W0126 23:29:29.553900 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7295582_a245_4bd4_928f_8cbaa456efc7.slice/crio-190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276 WatchSource:0}: Error finding container 190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276: Status 404 returned error can't find the container with id 190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276 Jan 26 23:29:29 crc kubenswrapper[4995]: I0126 23:29:29.724630 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.349169 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerStarted","Data":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.349568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerStarted","Data":"dbdc29f4e59ea5432ac8acd8aac8655730d2e92783170e75a0e2ef756183ec9c"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351818 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351866 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerStarted","Data":"190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.351904 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.354174 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerStarted","Data":"a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 
23:29:30.354224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerStarted","Data":"078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9"} Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.369656 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.369634811 podStartE2EDuration="2.369634811s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.363292833 +0000 UTC m=+1274.528000298" watchObservedRunningTime="2026-01-26 23:29:30.369634811 +0000 UTC m=+1274.534342276" Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.386521 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.386501722 podStartE2EDuration="2.386501722s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.381251191 +0000 UTC m=+1274.545958656" watchObservedRunningTime="2026-01-26 23:29:30.386501722 +0000 UTC m=+1274.551209187" Jan 26 23:29:30 crc kubenswrapper[4995]: I0126 23:29:30.411159 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.411135616 podStartE2EDuration="2.411135616s" podCreationTimestamp="2026-01-26 23:29:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:30.403055945 +0000 UTC m=+1274.567763410" watchObservedRunningTime="2026-01-26 23:29:30.411135616 +0000 UTC 
m=+1274.575843091" Jan 26 23:29:32 crc kubenswrapper[4995]: I0126 23:29:32.424143 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:34 crc kubenswrapper[4995]: I0126 23:29:34.075913 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:34 crc kubenswrapper[4995]: I0126 23:29:34.077390 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.075499 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.076164 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.093625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.110262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.132885 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.170710 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.437946 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.466390 4995 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.476532 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:39 crc kubenswrapper[4995]: I0126 23:29:39.489384 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558484 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558854 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" containerID="cri-o://eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" gracePeriod=30 Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.558969 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" containerID="cri-o://9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" gracePeriod=30 Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.559001 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" containerID="cri-o://f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" gracePeriod=30 Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.559209 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" 
containerID="cri-o://bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" gracePeriod=30 Jan 26 23:29:41 crc kubenswrapper[4995]: I0126 23:29:41.572176 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.137:3000/\": EOF" Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.461902 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" exitCode=0 Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462152 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" exitCode=2 Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462160 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" exitCode=0 Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.461967 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"} Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462186 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"} Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.462197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"} Jan 26 23:29:42 crc kubenswrapper[4995]: I0126 23:29:42.538995 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.137:3000/\": dial tcp 10.217.0.137:3000: connect: connection refused" Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.697442 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.697910 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" containerID="cri-o://f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" gracePeriod=30 Jan 26 23:29:43 crc kubenswrapper[4995]: I0126 23:29:43.698020 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" containerID="cri-o://063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" gracePeriod=30 Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.504908 4995 generic.go:334] "Generic (PLEG): container finished" podID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" exitCode=143 Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.504970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"} Jan 26 23:29:44 
crc kubenswrapper[4995]: I0126 23:29:44.564682 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.141:9322/\": read tcp 10.217.0.2:53466->10.217.0.141:9322: read: connection reset by peer" Jan 26 23:29:44 crc kubenswrapper[4995]: I0126 23:29:44.564765 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"https://10.217.0.141:9322/\": read tcp 10.217.0.2:53464->10.217.0.141:9322: read: connection reset by peer" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.010993 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092117 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092254 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " 
Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092479 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092522 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092584 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092637 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") pod \"d7295582-a245-4bd4-928f-8cbaa456efc7\" (UID: \"d7295582-a245-4bd4-928f-8cbaa456efc7\") " Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.092991 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs" (OuterVolumeSpecName: "logs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.093272 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7295582-a245-4bd4-928f-8cbaa456efc7-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.103996 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr" (OuterVolumeSpecName: "kube-api-access-b7vdr") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "kube-api-access-b7vdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.137943 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.186748 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194831 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194868 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.194880 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7vdr\" (UniqueName: \"kubernetes.io/projected/d7295582-a245-4bd4-928f-8cbaa456efc7-kube-api-access-b7vdr\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.203402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data" (OuterVolumeSpecName: "config-data") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.209669 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.210159 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d7295582-a245-4bd4-928f-8cbaa456efc7" (UID: "d7295582-a245-4bd4-928f-8cbaa456efc7"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296597 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296634 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.296674 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7295582-a245-4bd4-928f-8cbaa456efc7-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580250 4995 generic.go:334] "Generic (PLEG): container finished" podID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" exitCode=0 Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"} Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580329 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580346 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"d7295582-a245-4bd4-928f-8cbaa456efc7","Type":"ContainerDied","Data":"190da8f21718427e7a4d0063c224f2651e39a06b4be88eca6ab321bbd9023276"} Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.580366 4995 scope.go:117] "RemoveContainer" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.638878 4995 scope.go:117] "RemoveContainer" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.670219 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.678616 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.703926 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.704683 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.704709 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.704735 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.704745 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705017 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-kuttl-api-log" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705047 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" containerName="watcher-api" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.705278 4995 scope.go:117] "RemoveContainer" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.706366 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.707165 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": container with ID starting with 063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6 not found: ID does not exist" containerID="063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.707216 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6"} err="failed to get container status \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": rpc error: code = NotFound desc = could not find container \"063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6\": container with ID starting with 063403ca1a2058f3694a4f88c73ffb4620b31dacf557174b64cd8c2a2681efa6 not found: ID does not exist" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.707251 4995 scope.go:117] 
"RemoveContainer" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.711831 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.712076 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.712217 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714157 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714234 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714263 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714306 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.714340 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.718396 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.718475 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: E0126 23:29:45.743469 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": container with ID starting with f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420 not found: ID does not exist" containerID="f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.743530 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420"} err="failed to get container status \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": rpc error: code = NotFound desc = could not find container \"f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420\": container with ID starting with f956ae214518c2eaa7a9c2d69d50714911d85a41f453ba678dfc407679d25420 not found: ID does not exist" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.773974 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827490 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827536 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827560 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827629 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.827649 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.828728 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") 
pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838293 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838820 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.838873 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.848770 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.880830 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"watcher-kuttl-api-0\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:45 crc kubenswrapper[4995]: I0126 23:29:45.920843 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.069562 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7295582_a245_4bd4_928f_8cbaa456efc7.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b175699_64e9_4d8e_a89b_6a80468dd954.slice/crio-conmon-f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32.scope\": RecentStats: unable to find data in memory cache]" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.484857 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.550928 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7295582-a245-4bd4-928f-8cbaa456efc7" path="/var/lib/kubelet/pods/d7295582-a245-4bd4-928f-8cbaa456efc7/volumes" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.566057 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.609933 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"b1ed431fa560523554c77fc1ace70d32844eb1774bdcc487dd885dc3a028b347"} Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613436 4995 generic.go:334] "Generic (PLEG): container finished" podID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" exitCode=0 Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613477 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"} Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"4b175699-64e9-4d8e-a89b-6a80468dd954","Type":"ContainerDied","Data":"59a0d6316551ef4185a6e6468dc1b7de864944c245e1817ff6a911a9105c2b8a"} Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613524 4995 scope.go:117] "RemoveContainer" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.613642 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.632868 4995 scope.go:117] "RemoveContainer" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.648590 4995 scope.go:117] "RemoveContainer" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.667768 4995 scope.go:117] "RemoveContainer" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.701147 4995 scope.go:117] "RemoveContainer" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.701986 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": container with ID starting with bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d not found: ID does not exist" containerID="bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702059 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d"} err="failed to get container status \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": rpc error: code = NotFound desc = could not find container \"bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d\": container with ID starting with bf8f4d4542f5ad1e7380f2db23560267ecc105e05ab2b92e74b0732b6474df7d not found: ID does not exist" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702119 4995 scope.go:117] "RemoveContainer" 
containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.702740 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": container with ID starting with 9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847 not found: ID does not exist" containerID="9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702805 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847"} err="failed to get container status \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": rpc error: code = NotFound desc = could not find container \"9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847\": container with ID starting with 9f560d71316e0a2649ae03d7e5cd702d8a021dc17e64e96064b6cd0088260847 not found: ID does not exist" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.702837 4995 scope.go:117] "RemoveContainer" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.703121 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": container with ID starting with f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32 not found: ID does not exist" containerID="f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703148 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32"} err="failed to get container status \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": rpc error: code = NotFound desc = could not find container \"f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32\": container with ID starting with f8a74f9fe3e88d5ae89c4b44e648164dda7e3f3af197331941ff19e13b417b32 not found: ID does not exist" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703160 4995 scope.go:117] "RemoveContainer" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.703374 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": container with ID starting with eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff not found: ID does not exist" containerID="eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.703398 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff"} err="failed to get container status \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": rpc error: code = NotFound desc = could not find container \"eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff\": container with ID starting with eae574c2be0c1c60424cd6270be0ebc5fb1eaf6bbae715327f3759d95c2924ff not found: ID does not exist" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.750430 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: 
\"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.751257 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753869 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.753983 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754021 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754073 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754123 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") pod \"4b175699-64e9-4d8e-a89b-6a80468dd954\" (UID: \"4b175699-64e9-4d8e-a89b-6a80468dd954\") " Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts" (OuterVolumeSpecName: "scripts") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754827 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.754982 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.755004 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.755017 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4b175699-64e9-4d8e-a89b-6a80468dd954-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.757221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj" (OuterVolumeSpecName: "kube-api-access-64vhj") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "kube-api-access-64vhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.781277 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.802039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.825737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856667 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856713 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856728 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64vhj\" (UniqueName: \"kubernetes.io/projected/4b175699-64e9-4d8e-a89b-6a80468dd954-kube-api-access-64vhj\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.856739 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.872129 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data" (OuterVolumeSpecName: "config-data") pod "4b175699-64e9-4d8e-a89b-6a80468dd954" (UID: "4b175699-64e9-4d8e-a89b-6a80468dd954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.960587 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b175699-64e9-4d8e-a89b-6a80468dd954-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.968091 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.984287 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993403 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993847 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993869 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993889 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993897 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993910 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993921 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: E0126 23:29:46.993943 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.993950 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994180 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="proxy-httpd" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994204 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-notification-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994215 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="ceilometer-central-agent" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.994225 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" containerName="sg-core" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.996014 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:46 crc kubenswrapper[4995]: I0126 23:29:46.998492 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.000699 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.001029 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.017916 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062398 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062536 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062639 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062756 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.062941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.063045 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.063275 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164613 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164702 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164862 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164901 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.164937 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.165118 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.165287 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.169160 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.171409 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.171977 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.174636 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.188448 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.188727 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.312219 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.440087 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.462750 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-k4kxp"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.505933 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.511705 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.538528 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.562490 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.563952 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" containerID="cri-o://29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" gracePeriod=30 Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.573145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc 
kubenswrapper[4995]: I0126 23:29:47.573315 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.625808 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645268 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerStarted","Data":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.645666 4995 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="watcher-kuttl-default/watcher-kuttl-api-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-6ndh2\" not found" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.646494 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.663905 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.664324 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" containerID="cri-o://a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" gracePeriod=30 Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.673850 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.674689 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.675257 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: 
\"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: E0126 23:29:47.675375 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:47 crc kubenswrapper[4995]: E0126 23:29:47.675419 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. No retries permitted until 2026-01-26 23:29:48.175402557 +0000 UTC m=+1292.340110022 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.686205 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.686179216 podStartE2EDuration="2.686179216s" podCreationTimestamp="2026-01-26 23:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:47.683813867 +0000 UTC m=+1291.848521322" watchObservedRunningTime="2026-01-26 23:29:47.686179216 +0000 UTC m=+1291.850886681" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.705179 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"watcher17d4-account-delete-8vn6w\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 
23:29:47.860554 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:47 crc kubenswrapper[4995]: I0126 23:29:47.904415 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:47 crc kubenswrapper[4995]: W0126 23:29:47.907898 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5779d6d0_6f61_467c_b521_a16e0201f7ed.slice/crio-15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83 WatchSource:0}: Error finding container 15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83: Status 404 returned error can't find the container with id 15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83 Jan 26 23:29:48 crc kubenswrapper[4995]: E0126 23:29:48.183076 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:48 crc kubenswrapper[4995]: E0126 23:29:48.183518 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. No retries permitted until 2026-01-26 23:29:49.183501816 +0000 UTC m=+1293.348209281 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.308353 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.526752 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b175699-64e9-4d8e-a89b-6a80468dd954" path="/var/lib/kubelet/pods/4b175699-64e9-4d8e-a89b-6a80468dd954/volumes" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.528043 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe82d30b-18d6-486f-9494-034434237785" path="/var/lib/kubelet/pods/fe82d30b-18d6-486f-9494-034434237785/volumes" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.732926 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.732981 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerStarted","Data":"628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739034 4995 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" containerID="cri-o://46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" gracePeriod=30 Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.739090 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerStarted","Data":"9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799"} Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.742156 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" containerID="cri-o://1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" gracePeriod=30 Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.746462 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": EOF" Jan 26 23:29:48 crc kubenswrapper[4995]: I0126 23:29:48.768679 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" podStartSLOduration=1.768659638 podStartE2EDuration="1.768659638s" podCreationTimestamp="2026-01-26 23:29:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:29:48.763581282 +0000 UTC m=+1292.928288747" watchObservedRunningTime="2026-01-26 23:29:48.768659638 +0000 UTC m=+1292.933367103" Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.080174 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.090257 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.093177 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.093215 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.216487 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:49 crc kubenswrapper[4995]: E0126 23:29:49.216561 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:29:51.216545975 +0000 UTC m=+1295.381253440 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.746308 4995 generic.go:334] "Generic (PLEG): container finished" podID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" exitCode=143 Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.746381 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.747793 4995 generic.go:334] "Generic (PLEG): container finished" podID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerID="628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220" exitCode=0 Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.747848 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerDied","Data":"628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220"} Jan 26 23:29:49 crc kubenswrapper[4995]: I0126 23:29:49.749551 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.068623 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.758238 4995 generic.go:334] "Generic (PLEG): container finished" podID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" exitCode=0 Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.758328 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerDied","Data":"a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.760564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} Jan 26 23:29:50 crc kubenswrapper[4995]: I0126 23:29:50.921625 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.231360 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.240870 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:51 crc kubenswrapper[4995]: E0126 23:29:51.271645 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:51 crc kubenswrapper[4995]: E0126 23:29:51.271718 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data podName:08c10a90-cf36-46d8-9d0a-8152c08eccf9 nodeName:}" failed. 
No retries permitted until 2026-01-26 23:29:55.271701848 +0000 UTC m=+1299.436409313 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data") pod "watcher-kuttl-api-0" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372501 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") pod \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372624 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372647 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372757 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") pod \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\" (UID: \"2bb37bcc-61c1-4154-8ee5-991a34693b5d\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.372809 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") pod \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\" (UID: \"969a304d-b02f-40b9-b439-9f3f5b88ccfa\") " Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.373600 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs" (OuterVolumeSpecName: "logs") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.373981 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "969a304d-b02f-40b9-b439-9f3f5b88ccfa" (UID: "969a304d-b02f-40b9-b439-9f3f5b88ccfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.377291 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6" (OuterVolumeSpecName: "kube-api-access-w2pl6") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "kube-api-access-w2pl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.377794 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8" (OuterVolumeSpecName: "kube-api-access-2grq8") pod "969a304d-b02f-40b9-b439-9f3f5b88ccfa" (UID: "969a304d-b02f-40b9-b439-9f3f5b88ccfa"). InnerVolumeSpecName "kube-api-access-2grq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.404283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.428312 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data" (OuterVolumeSpecName: "config-data") pod "2bb37bcc-61c1-4154-8ee5-991a34693b5d" (UID: "2bb37bcc-61c1-4154-8ee5-991a34693b5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475616 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475658 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2bb37bcc-61c1-4154-8ee5-991a34693b5d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475674 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2pl6\" (UniqueName: \"kubernetes.io/projected/2bb37bcc-61c1-4154-8ee5-991a34693b5d-kube-api-access-w2pl6\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475687 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bb37bcc-61c1-4154-8ee5-991a34693b5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475701 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2grq8\" (UniqueName: \"kubernetes.io/projected/969a304d-b02f-40b9-b439-9f3f5b88ccfa-kube-api-access-2grq8\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.475713 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/969a304d-b02f-40b9-b439-9f3f5b88ccfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768550 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" event={"ID":"969a304d-b02f-40b9-b439-9f3f5b88ccfa","Type":"ContainerDied","Data":"9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799"} 
Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768587 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b010aec3dd4bdbe6aad29d1ac3dd99a9876c86815c99c09402282e05b400799" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.768641 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher17d4-account-delete-8vn6w" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773188 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2bb37bcc-61c1-4154-8ee5-991a34693b5d","Type":"ContainerDied","Data":"078cf8901e23cca7210ddf1d2f934fd11bae0a827f8594fb785a1b8e7011bda9"} Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773232 4995 scope.go:117] "RemoveContainer" containerID="a5ca23775cbc61e8524b6d4c2f483e44d643eeb7b9bf384b73ca503fe95aa044" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.773194 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776517 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerStarted","Data":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776670 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" containerID="cri-o://13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776886 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776930 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" containerID="cri-o://1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776967 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" containerID="cri-o://21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.776996 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" containerID="cri-o://4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" 
gracePeriod=30 Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.813610 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.80277482 podStartE2EDuration="5.813592041s" podCreationTimestamp="2026-01-26 23:29:46 +0000 UTC" firstStartedPulling="2026-01-26 23:29:47.91074641 +0000 UTC m=+1292.075453875" lastFinishedPulling="2026-01-26 23:29:50.921563631 +0000 UTC m=+1295.086271096" observedRunningTime="2026-01-26 23:29:51.811038667 +0000 UTC m=+1295.975746132" watchObservedRunningTime="2026-01-26 23:29:51.813592041 +0000 UTC m=+1295.978299506" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.856575 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.863957 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.949363 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": read tcp 10.217.0.2:48712->10.217.0.144:9322: read: connection reset by peer" Jan 26 23:29:51 crc kubenswrapper[4995]: I0126 23:29:51.949887 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" probeResult="failure" output="Get \"https://10.217.0.144:9322/\": dial tcp 10.217.0.144:9322: connect: connection refused" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.154460 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.291929 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292001 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292054 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292147 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.292243 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") pod \"dc0299c2-2a71-4542-bc23-10e088bfec0d\" (UID: \"dc0299c2-2a71-4542-bc23-10e088bfec0d\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.296498 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs" (OuterVolumeSpecName: "logs") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.308373 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668" (OuterVolumeSpecName: "kube-api-access-jv668") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "kube-api-access-jv668". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.381322 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393550 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393583 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc0299c2-2a71-4542-bc23-10e088bfec0d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.393594 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv668\" (UniqueName: \"kubernetes.io/projected/dc0299c2-2a71-4542-bc23-10e088bfec0d-kube-api-access-jv668\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.398882 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.462254 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data" (OuterVolumeSpecName: "config-data") pod "dc0299c2-2a71-4542-bc23-10e088bfec0d" (UID: "dc0299c2-2a71-4542-bc23-10e088bfec0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.495478 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.495829 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dc0299c2-2a71-4542-bc23-10e088bfec0d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.507902 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.527860 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" path="/var/lib/kubelet/pods/2bb37bcc-61c1-4154-8ee5-991a34693b5d/volumes" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.605821 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.621833 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-6ch9m"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.647173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.655710 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.666839 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-17d4-account-create-update-dj9g9"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.673209 4995 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher17d4-account-delete-8vn6w"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702409 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702596 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702618 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702713 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.702733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") pod \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\" (UID: \"08c10a90-cf36-46d8-9d0a-8152c08eccf9\") " Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.705624 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs" (OuterVolumeSpecName: "logs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.725949 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25" (OuterVolumeSpecName: "kube-api-access-jbx25") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "kube-api-access-jbx25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.726086 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.727468 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.743832 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data" (OuterVolumeSpecName: "config-data") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.744921 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.763656 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "08c10a90-cf36-46d8-9d0a-8152c08eccf9" (UID: "08c10a90-cf36-46d8-9d0a-8152c08eccf9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792839 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792875 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" exitCode=2 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792883 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792921 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.792963 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794571 4995 generic.go:334] "Generic (PLEG): container finished" podID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" 
exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794646 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794674 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerDied","Data":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794730 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"dc0299c2-2a71-4542-bc23-10e088bfec0d","Type":"ContainerDied","Data":"dbdc29f4e59ea5432ac8acd8aac8655730d2e92783170e75a0e2ef756183ec9c"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.794746 4995 scope.go:117] "RemoveContainer" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796315 4995 generic.go:334] "Generic (PLEG): container finished" podID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" exitCode=0 Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796405 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"08c10a90-cf36-46d8-9d0a-8152c08eccf9","Type":"ContainerDied","Data":"b1ed431fa560523554c77fc1ace70d32844eb1774bdcc487dd885dc3a028b347"} Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.796539 
4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.805598 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806422 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806446 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbx25\" (UniqueName: \"kubernetes.io/projected/08c10a90-cf36-46d8-9d0a-8152c08eccf9-kube-api-access-jbx25\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806459 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806471 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806483 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08c10a90-cf36-46d8-9d0a-8152c08eccf9-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.806495 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08c10a90-cf36-46d8-9d0a-8152c08eccf9-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.826352 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.829607 4995 scope.go:117] "RemoveContainer" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.830030 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": container with ID starting with 29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653 not found: ID does not exist" containerID="29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.830068 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653"} err="failed to get container status \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": rpc error: code = NotFound desc = could not find container \"29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653\": container with ID starting with 29a08ba22f5fb08cc08f1fc9fc42e8bf8da0f628fd6b83a64b32248038e5e653 not found: ID does not exist" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.830093 4995 scope.go:117] "RemoveContainer" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.832606 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.847391 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: 
I0126 23:29:52.853111 4995 scope.go:117] "RemoveContainer" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.854022 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872054 4995 scope.go:117] "RemoveContainer" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.872519 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": container with ID starting with 46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6 not found: ID does not exist" containerID="46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872561 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6"} err="failed to get container status \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": rpc error: code = NotFound desc = could not find container \"46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6\": container with ID starting with 46475218abbab37eac8da611fea9f69c764784c4bc06214dc969423f1ab653c6 not found: ID does not exist" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872589 4995 scope.go:117] "RemoveContainer" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: E0126 23:29:52.872915 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": container with ID 
starting with 1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7 not found: ID does not exist" containerID="1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7" Jan 26 23:29:52 crc kubenswrapper[4995]: I0126 23:29:52.872944 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7"} err="failed to get container status \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": rpc error: code = NotFound desc = could not find container \"1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7\": container with ID starting with 1a1487c50c1b7d2bc5ae5be1678413b9aa463d8accb5fefd14746361665074f7 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.791340 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809725 4995 generic.go:334] "Generic (PLEG): container finished" podID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" exitCode=0 Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809808 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809845 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"5779d6d0-6f61-467c-b521-a16e0201f7ed","Type":"ContainerDied","Data":"15dcd2f1dacb6b1874128ac2c9ca47a1269e26431a781aee91bfcaa8e21c1b83"} Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809850 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.809866 4995 scope.go:117] "RemoveContainer" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.814942 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815267 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815284 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815297 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815304 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815316 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815322 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815341 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815349 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815355 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815366 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815373 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815389 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815395 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815409 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815415 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.815426 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815432 4995 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815577 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bb37bcc-61c1-4154-8ee5-991a34693b5d" containerName="watcher-applier" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815585 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-api" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815595 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="proxy-httpd" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815606 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" containerName="watcher-kuttl-api-log" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815615 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" containerName="mariadb-account-delete" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815622 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-notification-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815631 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="sg-core" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815640 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" containerName="ceilometer-central-agent" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.815649 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" containerName="watcher-decision-engine" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.821131 
4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.825164 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.825250 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.837437 4995 scope.go:117] "RemoveContainer" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.839010 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.869595 4995 scope.go:117] "RemoveContainer" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.907139 4995 scope.go:117] "RemoveContainer" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.924300 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.925606 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926042 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926138 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926165 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926585 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926691 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926790 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926812 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.926865 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") pod \"5779d6d0-6f61-467c-b521-a16e0201f7ed\" (UID: \"5779d6d0-6f61-467c-b521-a16e0201f7ed\") " Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.927318 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.927376 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.928609 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod 
"5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.929543 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.930284 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.931452 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.933347 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54" (OuterVolumeSpecName: "kube-api-access-pkf54") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "kube-api-access-pkf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.951666 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts" (OuterVolumeSpecName: "scripts") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960740 4995 scope.go:117] "RemoveContainer" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960840 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"watcher-db-create-8r7vh\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.960928 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.961310 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": container with ID starting with 1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9 not found: ID does not exist" containerID="1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961339 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9"} err="failed to get container status \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": rpc error: code = NotFound desc = could not find container \"1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9\": container with ID starting with 1e97c427e4f82ae43f5cd6adb18e5c1dfd8b47603aaf933bff2c839629a061e9 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961359 4995 scope.go:117] "RemoveContainer" 
containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.961726 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": container with ID starting with 21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e not found: ID does not exist" containerID="21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961778 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e"} err="failed to get container status \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": rpc error: code = NotFound desc = could not find container \"21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e\": container with ID starting with 21ca6d822114b4cd6030ef624203648e7fdb5adcdd66f49dfa06520764b3403e not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.961808 4995 scope.go:117] "RemoveContainer" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.968860 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": container with ID starting with 4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8 not found: ID does not exist" containerID="4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.968902 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8"} err="failed to get container status \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": rpc error: code = NotFound desc = could not find container \"4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8\": container with ID starting with 4847083f653cceb9f57d782d1a226f1f046c12ca3fa1cb2434ecc86d72f656b8 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.968926 4995 scope.go:117] "RemoveContainer" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: E0126 23:29:53.969790 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": container with ID starting with 13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858 not found: ID does not exist" containerID="13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.969896 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858"} err="failed to get container status \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": rpc error: code = NotFound desc = could not find container \"13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858\": container with ID starting with 13c471ddae9483cb048d50c806f5854de723afd6ddaf1cbbb9b2aca2a4419858 not found: ID does not exist" Jan 26 23:29:53 crc kubenswrapper[4995]: I0126 23:29:53.981542 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod 
"5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.015992 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.029941 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030074 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030086 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030096 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030118 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5779d6d0-6f61-467c-b521-a16e0201f7ed-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030126 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.030135 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkf54\" (UniqueName: \"kubernetes.io/projected/5779d6d0-6f61-467c-b521-a16e0201f7ed-kube-api-access-pkf54\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.063863 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.070160 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data" (OuterVolumeSpecName: "config-data") pod "5779d6d0-6f61-467c-b521-a16e0201f7ed" (UID: "5779d6d0-6f61-467c-b521-a16e0201f7ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131513 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131712 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131792 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.131809 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5779d6d0-6f61-467c-b521-a16e0201f7ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.132415 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.139522 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.142785 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.151728 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.159784 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"watcher-26de-account-create-update-h8699\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.180939 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.183216 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.186227 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.186499 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.189986 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.191143 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234468 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234508 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234580 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234626 
4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234699 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234726 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.234754 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.338661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339030 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339076 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339094 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339153 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339185 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: 
\"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339245 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.339276 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.340081 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.341273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.344696 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.344875 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.346708 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.348827 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.349084 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.357477 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"ceilometer-0\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.421345 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.503047 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.560893 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c10a90-cf36-46d8-9d0a-8152c08eccf9" path="/var/lib/kubelet/pods/08c10a90-cf36-46d8-9d0a-8152c08eccf9/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.561971 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5779d6d0-6f61-467c-b521-a16e0201f7ed" path="/var/lib/kubelet/pods/5779d6d0-6f61-467c-b521-a16e0201f7ed/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.562909 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969a304d-b02f-40b9-b439-9f3f5b88ccfa" path="/var/lib/kubelet/pods/969a304d-b02f-40b9-b439-9f3f5b88ccfa/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.563892 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0710d60-452a-4ffb-80e7-cf4b95c4b93c" path="/var/lib/kubelet/pods/c0710d60-452a-4ffb-80e7-cf4b95c4b93c/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.564464 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db61ff94-84e4-46ff-affd-1d1fd691a219" path="/var/lib/kubelet/pods/db61ff94-84e4-46ff-affd-1d1fd691a219/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.565056 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0299c2-2a71-4542-bc23-10e088bfec0d" path="/var/lib/kubelet/pods/dc0299c2-2a71-4542-bc23-10e088bfec0d/volumes" Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.676690 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 
23:29:54.822661 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerStarted","Data":"833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e"} Jan 26 23:29:54 crc kubenswrapper[4995]: I0126 23:29:54.953839 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.060412 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:29:55 crc kubenswrapper[4995]: W0126 23:29:55.062802 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f8b520e_94ee_43d6_bd95_d3b1b0a10649.slice/crio-aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f WatchSource:0}: Error finding container aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f: Status 404 returned error can't find the container with id aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.834011 4995 generic.go:334] "Generic (PLEG): container finished" podID="360b1483-8046-4c4c-920d-69387e2fbbed" containerID="b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47" exitCode=0 Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.834469 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerDied","Data":"b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836804 4995 generic.go:334] "Generic (PLEG): container finished" podID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerID="9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc" exitCode=0 Jan 
26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836846 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerDied","Data":"9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.836867 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerStarted","Data":"a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.838680 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320"} Jan 26 23:29:55 crc kubenswrapper[4995]: I0126 23:29:55.838712 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f"} Jan 26 23:29:56 crc kubenswrapper[4995]: I0126 23:29:56.861078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72"} Jan 26 23:29:56 crc kubenswrapper[4995]: I0126 23:29:56.861758 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.300200 4995 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.313835 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") pod \"360b1483-8046-4c4c-920d-69387e2fbbed\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.314324 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") pod \"360b1483-8046-4c4c-920d-69387e2fbbed\" (UID: \"360b1483-8046-4c4c-920d-69387e2fbbed\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.315611 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "360b1483-8046-4c4c-920d-69387e2fbbed" (UID: "360b1483-8046-4c4c-920d-69387e2fbbed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.325868 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj" (OuterVolumeSpecName: "kube-api-access-v89tj") pod "360b1483-8046-4c4c-920d-69387e2fbbed" (UID: "360b1483-8046-4c4c-920d-69387e2fbbed"). InnerVolumeSpecName "kube-api-access-v89tj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.376078 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415284 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") pod \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415474 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") pod \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\" (UID: \"ab224b66-6f5e-4e78-bdc4-e913dcb2250a\") " Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415851 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v89tj\" (UniqueName: \"kubernetes.io/projected/360b1483-8046-4c4c-920d-69387e2fbbed-kube-api-access-v89tj\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.415873 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/360b1483-8046-4c4c-920d-69387e2fbbed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.418050 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab224b66-6f5e-4e78-bdc4-e913dcb2250a" (UID: "ab224b66-6f5e-4e78-bdc4-e913dcb2250a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.420218 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c" (OuterVolumeSpecName: "kube-api-access-bvm5c") pod "ab224b66-6f5e-4e78-bdc4-e913dcb2250a" (UID: "ab224b66-6f5e-4e78-bdc4-e913dcb2250a"). InnerVolumeSpecName "kube-api-access-bvm5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.517143 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.517183 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvm5c\" (UniqueName: \"kubernetes.io/projected/ab224b66-6f5e-4e78-bdc4-e913dcb2250a-kube-api-access-bvm5c\") on node \"crc\" DevicePath \"\"" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873033 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8r7vh" event={"ID":"360b1483-8046-4c4c-920d-69387e2fbbed","Type":"ContainerDied","Data":"833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873079 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="833c638770cf9d272616427da3157be1474f20a0ade1400ac79962a7b73c6e8e" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.873183 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8r7vh" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" event={"ID":"ab224b66-6f5e-4e78-bdc4-e913dcb2250a","Type":"ContainerDied","Data":"a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f"} Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887239 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a623d35a183514c4477dab94518c449d6888d68791e57b1f6029091ee004575f" Jan 26 23:29:57 crc kubenswrapper[4995]: I0126 23:29:57.887238 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-26de-account-create-update-h8699" Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.896651 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerStarted","Data":"41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825"} Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.896949 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:29:58 crc kubenswrapper[4995]: I0126 23:29:58.921974 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.197292262 podStartE2EDuration="4.921955346s" podCreationTimestamp="2026-01-26 23:29:54 +0000 UTC" firstStartedPulling="2026-01-26 23:29:55.064973802 +0000 UTC m=+1299.229681267" lastFinishedPulling="2026-01-26 23:29:57.789636886 +0000 UTC m=+1301.954344351" observedRunningTime="2026-01-26 23:29:58.919188396 +0000 UTC m=+1303.083895861" watchObservedRunningTime="2026-01-26 23:29:58.921955346 +0000 UTC m=+1303.086662831" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 
23:29:59.140436 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:29:59 crc kubenswrapper[4995]: E0126 23:29:59.140865 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.140891 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: E0126 23:29:59.140904 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.140916 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.141258 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" containerName="mariadb-account-create-update" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.141298 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" containerName="mariadb-database-create" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.142358 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.146264 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-h5tln" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.146531 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.149809 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251555 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251631 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.251924 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.352991 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353071 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353121 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.353142 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 
23:29:59.358490 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.358701 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.358988 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.373989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"watcher-kuttl-db-sync-5tdfh\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:29:59 crc kubenswrapper[4995]: I0126 23:29:59.460390 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.059656 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:30:00 crc kubenswrapper[4995]: W0126 23:30:00.063730 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74804f16_0037_44f0_a6a5_71414a33cee2.slice/crio-b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede WatchSource:0}: Error finding container b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede: Status 404 returned error can't find the container with id b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.132555 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.134051 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.137879 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.138237 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.142569 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166147 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166204 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.166272 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267284 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267344 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.267388 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.268272 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.272284 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.287879 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"collect-profiles-29491170-xm57p\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.485332 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.913873 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerStarted","Data":"5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279"} Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.914114 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerStarted","Data":"b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede"} Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.932727 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" podStartSLOduration=1.9327120089999998 podStartE2EDuration="1.932712009s" podCreationTimestamp="2026-01-26 23:29:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
23:30:00.929240833 +0000 UTC m=+1305.093948318" watchObservedRunningTime="2026-01-26 23:30:00.932712009 +0000 UTC m=+1305.097419474" Jan 26 23:30:00 crc kubenswrapper[4995]: I0126 23:30:00.993835 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p"] Jan 26 23:30:01 crc kubenswrapper[4995]: W0126 23:30:01.002642 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54656312_1776_448a_aed7_759b65eb3763.slice/crio-301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14 WatchSource:0}: Error finding container 301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14: Status 404 returned error can't find the container with id 301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14 Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.921890 4995 generic.go:334] "Generic (PLEG): container finished" podID="54656312-1776-448a-aed7-759b65eb3763" containerID="742b3454037bbd44149a6c25e12eb0286e362e0941f889c7b4b09e41324862da" exitCode=0 Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.922012 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerDied","Data":"742b3454037bbd44149a6c25e12eb0286e362e0941f889c7b4b09e41324862da"} Jan 26 23:30:01 crc kubenswrapper[4995]: I0126 23:30:01.922474 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerStarted","Data":"301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14"} Jan 26 23:30:02 crc kubenswrapper[4995]: I0126 23:30:02.932399 4995 generic.go:334] "Generic (PLEG): container finished" podID="74804f16-0037-44f0-a6a5-71414a33cee2" 
containerID="5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279" exitCode=0 Jan 26 23:30:02 crc kubenswrapper[4995]: I0126 23:30:02.932470 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerDied","Data":"5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279"} Jan 26 23:30:03 crc kubenswrapper[4995]: E0126 23:30:03.231258 4995 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:48942->38.102.83.164:42819: write tcp 38.102.83.164:48942->38.102.83.164:42819: write: broken pipe Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.323308 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425520 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425577 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.425676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") pod \"54656312-1776-448a-aed7-759b65eb3763\" (UID: \"54656312-1776-448a-aed7-759b65eb3763\") " Jan 26 23:30:03 crc 
kubenswrapper[4995]: I0126 23:30:03.427007 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume" (OuterVolumeSpecName: "config-volume") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.436280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch" (OuterVolumeSpecName: "kube-api-access-k6nch") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "kube-api-access-k6nch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.442367 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54656312-1776-448a-aed7-759b65eb3763" (UID: "54656312-1776-448a-aed7-759b65eb3763"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538308 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54656312-1776-448a-aed7-759b65eb3763-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538347 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6nch\" (UniqueName: \"kubernetes.io/projected/54656312-1776-448a-aed7-759b65eb3763-kube-api-access-k6nch\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.538357 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54656312-1776-448a-aed7-759b65eb3763-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.943648 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.943644 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491170-xm57p" event={"ID":"54656312-1776-448a-aed7-759b65eb3763","Type":"ContainerDied","Data":"301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14"} Jan 26 23:30:03 crc kubenswrapper[4995]: I0126 23:30:03.944147 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="301075a855e97b7d9c2bf4ca3127150a885502ea1afa7b11b0616dc5c03d6d14" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.313343 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353611 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353682 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353873 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.353998 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") pod \"74804f16-0037-44f0-a6a5-71414a33cee2\" (UID: \"74804f16-0037-44f0-a6a5-71414a33cee2\") " Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.382855 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2" (OuterVolumeSpecName: "kube-api-access-xxbf2") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "kube-api-access-xxbf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.383166 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.414677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data" (OuterVolumeSpecName: "config-data") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.416965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74804f16-0037-44f0-a6a5-71414a33cee2" (UID: "74804f16-0037-44f0-a6a5-71414a33cee2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456275 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxbf2\" (UniqueName: \"kubernetes.io/projected/74804f16-0037-44f0-a6a5-71414a33cee2-kube-api-access-xxbf2\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456304 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456315 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.456325 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74804f16-0037-44f0-a6a5-71414a33cee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.972816 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" event={"ID":"74804f16-0037-44f0-a6a5-71414a33cee2","Type":"ContainerDied","Data":"b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede"} Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.973076 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b50b4a30f6f75cc6a1d277a120e89c5beece2b2c0b19beb2d56bbdbcdd7beede" Jan 26 23:30:04 crc kubenswrapper[4995]: I0126 23:30:04.973188 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243321 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: E0126 23:30:05.243640 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243652 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: E0126 23:30:05.243666 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243672 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243824 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" containerName="watcher-kuttl-db-sync" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.243834 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="54656312-1776-448a-aed7-759b65eb3763" containerName="collect-profiles" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.244761 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.262953 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.263037 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-h5tln" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.264040 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.264126 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.269770 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.277155 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.278273 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.289875 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.306295 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.355075 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.356401 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.358772 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.366470 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374402 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374441 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374502 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374520 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: 
\"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374891 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374945 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.374976 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476315 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: 
\"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476382 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476408 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476461 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476498 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476531 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476557 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476619 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476654 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476689 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476724 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476749 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476788 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.476823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.477196 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.480594 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485741 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485889 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.485951 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.486758 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"watcher-kuttl-api-0\" 
(UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.498124 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"watcher-kuttl-api-0\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.577728 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578819 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.578912 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579036 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579506 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.579874 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.580137 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.581711 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.582931 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.582995 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.583770 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.584251 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.589531 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.611919 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"watcher-kuttl-applier-0\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.613187 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.674184 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:05 crc kubenswrapper[4995]: I0126 23:30:05.905384 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.075488 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.176626 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: I0126 23:30:06.365895 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:06 crc kubenswrapper[4995]: W0126 23:30:06.381287 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19b6df5_abba_4eeb_9103_ac018449be94.slice/crio-57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620 WatchSource:0}: Error finding container 57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620: Status 404 returned error can't find the container with id 57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620 Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000028 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000425 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846"} Jan 26 23:30:07 crc 
kubenswrapper[4995]: I0126 23:30:07.000445 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerStarted","Data":"54980638d9727bc6af52a006a8f0a0d24420ad4add393daec95332d4aba13d66"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.000469 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.002322 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerStarted","Data":"3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.002406 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerStarted","Data":"cba3ab62d7d62a0d684ffabbf01be7f833800d0b23faa4d8fc8f45160ef60210"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.004433 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerStarted","Data":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.005069 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerStarted","Data":"57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620"} Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.034828 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.034810155 
podStartE2EDuration="2.034810155s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.028127328 +0000 UTC m=+1311.192834793" watchObservedRunningTime="2026-01-26 23:30:07.034810155 +0000 UTC m=+1311.199517610" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.048564 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.048547778 podStartE2EDuration="2.048547778s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.048130408 +0000 UTC m=+1311.212837873" watchObservedRunningTime="2026-01-26 23:30:07.048547778 +0000 UTC m=+1311.213255243" Jan 26 23:30:07 crc kubenswrapper[4995]: I0126 23:30:07.069262 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.069243536 podStartE2EDuration="2.069243536s" podCreationTimestamp="2026-01-26 23:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:07.065938303 +0000 UTC m=+1311.230645758" watchObservedRunningTime="2026-01-26 23:30:07.069243536 +0000 UTC m=+1311.233951001" Jan 26 23:30:09 crc kubenswrapper[4995]: I0126 23:30:09.562376 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:10 crc kubenswrapper[4995]: I0126 23:30:10.590355 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:10 crc kubenswrapper[4995]: I0126 23:30:10.907225 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.590858 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.608595 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.675368 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.707252 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.907262 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:15 crc kubenswrapper[4995]: I0126 23:30:15.955767 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.087127 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.094872 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.116199 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:16 crc kubenswrapper[4995]: I0126 23:30:16.122594 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 
23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.259374 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.259934 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" containerID="cri-o://8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.260696 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" containerID="cri-o://41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.260977 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" containerID="cri-o://f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.261021 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" containerID="cri-o://94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" gracePeriod=30 Jan 26 23:30:18 crc kubenswrapper[4995]: I0126 23:30:18.276596 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.149:3000/\": EOF" Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112481 4995 generic.go:334] "Generic 
(PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" exitCode=0 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112542 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" exitCode=2 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112563 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" exitCode=0 Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825"} Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112642 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72"} Jan 26 23:30:19 crc kubenswrapper[4995]: I0126 23:30:19.112669 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320"} Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.162352 4995 generic.go:334] "Generic (PLEG): container finished" podID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerID="94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" exitCode=0 Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.162594 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5"} Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.293674 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412560 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412678 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412752 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412785 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412810 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412831 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412872 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.412941 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") pod \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\" (UID: \"7f8b520e-94ee-43d6-bd95-d3b1b0a10649\") " Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.413465 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.414268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.418870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq" (OuterVolumeSpecName: "kube-api-access-9n7pq") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "kube-api-access-9n7pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.421464 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts" (OuterVolumeSpecName: "scripts") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.446955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.475427 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.482182 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515038 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515087 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n7pq\" (UniqueName: \"kubernetes.io/projected/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-kube-api-access-9n7pq\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515126 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515142 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515159 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515171 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.515182 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.529296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data" (OuterVolumeSpecName: "config-data") pod "7f8b520e-94ee-43d6-bd95-d3b1b0a10649" (UID: "7f8b520e-94ee-43d6-bd95-d3b1b0a10649"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:23 crc kubenswrapper[4995]: I0126 23:30:23.617596 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f8b520e-94ee-43d6-bd95-d3b1b0a10649-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.172990 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"7f8b520e-94ee-43d6-bd95-d3b1b0a10649","Type":"ContainerDied","Data":"aa67959476738862834dc8998fdfc7da48cfa14012e478d6d42b65aed2aa482f"} Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.173069 4995 scope.go:117] "RemoveContainer" containerID="41e65b4db8702b530f563d695c4ed0a469a72700beb73c508fc925f625247825" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.174152 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.193305 4995 scope.go:117] "RemoveContainer" containerID="f115c8acd4047a269367680cb5e5077d9449d56ed4326ed7a82693f8a1db6b72" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.215699 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.224915 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.228650 4995 scope.go:117] "RemoveContainer" containerID="94d9d8bc5f94e5baf7ccac973e0ed26921a007783ddea5f0a6c09cd10d4ddfd5" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.235847 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236185 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236201 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236212 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236219 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236231 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236237 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: E0126 23:30:24.236258 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236264 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236403 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="proxy-httpd" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236413 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-notification-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236423 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="sg-core" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.236435 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" containerName="ceilometer-central-agent" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.238574 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.243491 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.243807 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.244010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.251120 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.269647 4995 scope.go:117] "RemoveContainer" containerID="8d2bd0f5b7597157a9cb981c13d45c9442331cbe46c3e93f20bf03bd3f8e6320" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429488 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429599 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" 
(UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429678 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429711 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429728 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429745 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.429779 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.530620 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531015 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531048 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531796 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531882 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531902 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.531975 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.533199 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.535661 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.535866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.536202 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f8b520e-94ee-43d6-bd95-d3b1b0a10649" path="/var/lib/kubelet/pods/7f8b520e-94ee-43d6-bd95-d3b1b0a10649/volumes" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.536539 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.537325 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.553218 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.555921 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wz2l\" (UniqueName: 
\"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"ceilometer-0\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:24 crc kubenswrapper[4995]: I0126 23:30:24.572546 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.083738 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.184746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"c97dd3f359f663362140f94ede7f9243c229adb056421298fec32584624f10b0"} Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.998740 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:25 crc kubenswrapper[4995]: I0126 23:30:25.998951 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/memcached-0" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" containerID="cri-o://3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.065627 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.065841 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" containerID="cri-o://9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.098026 4995 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.098290 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" containerID="cri-o://3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109051 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109339 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" containerID="cri-o://b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.109380 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" containerID="cri-o://d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" gracePeriod=30 Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.168265 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.174641 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-w6lw7"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.193332 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.232433 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.233458 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.235975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"osp-secret" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.236300 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-mtls" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.243159 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.362968 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363569 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363627 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363663 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363707 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363762 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.363816 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466222 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466323 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466355 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466421 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466452 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.466540 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.471892 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.472401 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.473436 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.475529 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.489716 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.490830 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.493744 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"keystone-bootstrap-sf9jb\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.529906 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="049184a2-2d7f-4107-8a72-197fede36e5b" path="/var/lib/kubelet/pods/049184a2-2d7f-4107-8a72-197fede36e5b/volumes" Jan 26 23:30:26 crc kubenswrapper[4995]: I0126 23:30:26.582086 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:27 crc kubenswrapper[4995]: E0126 23:30:27.086416 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode035ba66_a2ec_4127_a799_bb9dd2d07e2f.slice/crio-conmon-d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.209032 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.248446 4995 generic.go:334] "Generic (PLEG): container finished" podID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerID="d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" exitCode=0 Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.248797 4995 generic.go:334] "Generic (PLEG): container finished" podID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerID="b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" exitCode=143 Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.249224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.249286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.255352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.507348 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589717 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589798 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589825 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.589906 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") pod \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\" (UID: \"e035ba66-a2ec-4127-a799-bb9dd2d07e2f\") " Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.591369 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs" (OuterVolumeSpecName: "logs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.610361 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm" (OuterVolumeSpecName: "kube-api-access-7n7fm") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "kube-api-access-7n7fm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.627326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.633237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.646325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.661288 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: "e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691517 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691556 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691567 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n7fm\" (UniqueName: \"kubernetes.io/projected/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-kube-api-access-7n7fm\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691576 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691584 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.691593 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.697224 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data" (OuterVolumeSpecName: "config-data") pod "e035ba66-a2ec-4127-a799-bb9dd2d07e2f" (UID: 
"e035ba66-a2ec-4127-a799-bb9dd2d07e2f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.792860 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e035ba66-a2ec-4127-a799-bb9dd2d07e2f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:27 crc kubenswrapper[4995]: I0126 23:30:27.824740 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031233 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031345 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.031512 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") pod \"a19b6df5-abba-4eeb-9103-ac018449be94\" (UID: \"a19b6df5-abba-4eeb-9103-ac018449be94\") " Jan 26 
23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.032019 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs" (OuterVolumeSpecName: "logs") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.037547 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw" (OuterVolumeSpecName: "kube-api-access-nxgsw") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "kube-api-access-nxgsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.072052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data" (OuterVolumeSpecName: "config-data") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.074505 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a19b6df5-abba-4eeb-9103-ac018449be94" (UID: "a19b6df5-abba-4eeb-9103-ac018449be94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132788 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxgsw\" (UniqueName: \"kubernetes.io/projected/a19b6df5-abba-4eeb-9103-ac018449be94-kube-api-access-nxgsw\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132818 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132828 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a19b6df5-abba-4eeb-9103-ac018449be94-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.132837 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19b6df5-abba-4eeb-9103-ac018449be94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.294827 4995 generic.go:334] "Generic (PLEG): container finished" podID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerID="3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.295000 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerDied","Data":"3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"e035ba66-a2ec-4127-a799-bb9dd2d07e2f","Type":"ContainerDied","Data":"54980638d9727bc6af52a006a8f0a0d24420ad4add393daec95332d4aba13d66"} Jan 26 
23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302457 4995 scope.go:117] "RemoveContainer" containerID="d9c94c5ab51cf39db5bd5239323a38c0c83e1c1237247e92b84f13365da920b7" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.302574 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.312989 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerID="3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.313060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerDied","Data":"3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335807 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19b6df5-abba-4eeb-9103-ac018449be94" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" exitCode=0 Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335883 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerDied","Data":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335883 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.335907 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"a19b6df5-abba-4eeb-9103-ac018449be94","Type":"ContainerDied","Data":"57c851c6087377395317ac025b2f640b05445a770811145b3bf8fc60a87a2620"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.346906 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerStarted","Data":"27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.346949 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerStarted","Data":"9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.348890 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.373060 4995 scope.go:117] "RemoveContainer" containerID="b81eb9321e4696c7a5dc2b9010299843c0050f48570fe2196a764234a9455846" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.380153 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.402462 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.422614 4995 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.422989 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423003 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.423023 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423031 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.423063 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423071 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423309 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" containerName="watcher-applier" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423325 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-kuttl-api-log" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423337 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" containerName="watcher-api" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.423350 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" podStartSLOduration=2.423331514 podStartE2EDuration="2.423331514s" podCreationTimestamp="2026-01-26 23:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:28.383507688 +0000 UTC m=+1332.548215153" watchObservedRunningTime="2026-01-26 23:30:28.423331514 +0000 UTC m=+1332.588038979" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.426855 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.433735 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.434018 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-public-svc" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.439139 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-watcher-internal-svc" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.442711 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.449733 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.460629 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.468048 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.471453 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.473613 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475816 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475866 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475902 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.475978 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476005 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476180 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.476350 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.503255 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.549597 4995 scope.go:117] "RemoveContainer" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.551429 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="a19b6df5-abba-4eeb-9103-ac018449be94" path="/var/lib/kubelet/pods/a19b6df5-abba-4eeb-9103-ac018449be94/volumes" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.552013 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e035ba66-a2ec-4127-a799-bb9dd2d07e2f" path="/var/lib/kubelet/pods/e035ba66-a2ec-4127-a799-bb9dd2d07e2f/volumes" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.564182 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.568307 4995 scope.go:117] "RemoveContainer" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: E0126 23:30:28.568763 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": container with ID starting with 9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58 not found: ID does not exist" containerID="9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.568794 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58"} err="failed to get container status \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": rpc error: code = NotFound desc = could not find container \"9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58\": container with ID starting with 9be294cde7514d6d117d409e0ae54172c09683b3553529022bd38c3a5cc85a58 not found: ID does not exist" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.569927 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580863 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580925 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580950 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.580982 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581002 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" 
Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581037 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581064 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581115 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581138 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581157 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581193 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581249 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.581264 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.582703 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.589816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc 
kubenswrapper[4995]: I0126 23:30:28.609780 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.611112 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.617224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.619281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.621655 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.622283 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.681990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682041 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682092 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682152 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682189 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: 
\"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682225 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682244 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682280 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") pod \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\" (UID: \"37ec7b7e-84e8-4a58-b676-c06ed9a0809e\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682325 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682376 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") pod \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\" (UID: \"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991\") " Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682570 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682598 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682657 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682697 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.682731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.683045 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686034 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686254 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs" (OuterVolumeSpecName: "logs") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.686505 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data" (OuterVolumeSpecName: "config-data") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.688205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.689622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.691718 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg" (OuterVolumeSpecName: "kube-api-access-2qjbg") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "kube-api-access-2qjbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.692383 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2" (OuterVolumeSpecName: "kube-api-access-bhnc2") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "kube-api-access-bhnc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.697686 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.705490 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"watcher-kuttl-applier-0\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.707351 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.716434 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.717704 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.725665 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "37ec7b7e-84e8-4a58-b676-c06ed9a0809e" (UID: "37ec7b7e-84e8-4a58-b676-c06ed9a0809e"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.731069 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data" (OuterVolumeSpecName: "config-data") pod "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" (UID: "8cc7f7fc-dd9c-455a-98bd-191bcb4c9991"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784174 4995 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784208 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784218 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784226 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784233 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784243 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhnc2\" (UniqueName: \"kubernetes.io/projected/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-kube-api-access-bhnc2\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784252 4995 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 
23:30:28.784261 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qjbg\" (UniqueName: \"kubernetes.io/projected/37ec7b7e-84e8-4a58-b676-c06ed9a0809e-kube-api-access-2qjbg\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784271 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.784281 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.860459 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:28 crc kubenswrapper[4995]: I0126 23:30:28.924887 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:29 crc kubenswrapper[4995]: W0126 23:30:29.150765 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca013d92_6492_419e_b3c4_cfd440daa2bb.slice/crio-e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c WatchSource:0}: Error finding container e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c: Status 404 returned error can't find the container with id e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.152955 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.359780 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.359772 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"8cc7f7fc-dd9c-455a-98bd-191bcb4c9991","Type":"ContainerDied","Data":"cba3ab62d7d62a0d684ffabbf01be7f833800d0b23faa4d8fc8f45160ef60210"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.360340 4995 scope.go:117] "RemoveContainer" containerID="3284f8c951b3e7130a4783b7d13c32061d2e7016da9e1aeeb19449a9e7dee999" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.361266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.365821 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerStarted","Data":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.366616 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.370048 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.373313 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"37ec7b7e-84e8-4a58-b676-c06ed9a0809e","Type":"ContainerDied","Data":"1fe63fca4fd6cb5199a750cf9e863e7fdd11939b8e0ee09e81633ccef9bdd3c7"} Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.376796 4995 scope.go:117] "RemoveContainer" containerID="3e04e760b0c77644e191bf4781347a5b2f4ffde2d098dc88a856836722be3efd" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.404570 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.667497953 podStartE2EDuration="5.404550367s" podCreationTimestamp="2026-01-26 23:30:24 +0000 UTC" firstStartedPulling="2026-01-26 23:30:25.095902211 +0000 UTC m=+1329.260609676" lastFinishedPulling="2026-01-26 23:30:28.832954635 +0000 UTC m=+1332.997662090" observedRunningTime="2026-01-26 23:30:29.402407813 +0000 UTC m=+1333.567115268" watchObservedRunningTime="2026-01-26 23:30:29.404550367 +0000 UTC m=+1333.569257842" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.442992 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.474594 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.500803 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.524629 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: E0126 23:30:29.529394 4995 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529469 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: E0126 23:30:29.529524 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529533 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529735 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" containerName="memcached" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.529755 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" containerName="watcher-decision-engine" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559528 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559569 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559582 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.559656 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.560453 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.561623 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.563869 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"memcached-memcached-dockercfg-zzlxj" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.564215 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"watcher-kuttl-default"/"memcached-config-data" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.564454 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.565534 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-memcached-svc" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.575941 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596527 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596686 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.596771 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597044 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597136 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597176 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597278 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.597303 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698447 
4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698497 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698535 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698562 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698588 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698604 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698621 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698681 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698704 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.698723 4995 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699289 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699434 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kolla-config\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.699502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-config-data\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.702666 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.702843 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.709267 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.711782 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.711927 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.713015 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.713715 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"118c105c-80f5-4d0f-94c2-17f3269025ca\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.715553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb5bh\" (UniqueName: \"kubernetes.io/projected/9e495843-c3b4-4d2e-9c40-b11f0d95b5f9-kube-api-access-bb5bh\") pod \"memcached-0\" (UID: \"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9\") " pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.903010 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:29 crc kubenswrapper[4995]: I0126 23:30:29.913682 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.417165 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/memcached-0"] Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.422513 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerStarted","Data":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.422573 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerStarted","Data":"c39928c36c8af1a9535983a878c5e72ae844418dbec585db7b98acb4c5ad7317"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.427464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} Jan 26 23:30:30 crc 
kubenswrapper[4995]: I0126 23:30:30.427586 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerStarted","Data":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.427640 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.439870 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.439854291 podStartE2EDuration="2.439854291s" podCreationTimestamp="2026-01-26 23:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:30.43821391 +0000 UTC m=+1334.602921375" watchObservedRunningTime="2026-01-26 23:30:30.439854291 +0000 UTC m=+1334.604561756" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.465434 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.46541337 podStartE2EDuration="2.46541337s" podCreationTimestamp="2026-01-26 23:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:30.463612725 +0000 UTC m=+1334.628320190" watchObservedRunningTime="2026-01-26 23:30:30.46541337 +0000 UTC m=+1334.630120855" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.528477 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ec7b7e-84e8-4a58-b676-c06ed9a0809e" path="/var/lib/kubelet/pods/37ec7b7e-84e8-4a58-b676-c06ed9a0809e/volumes" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.530864 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8cc7f7fc-dd9c-455a-98bd-191bcb4c9991" path="/var/lib/kubelet/pods/8cc7f7fc-dd9c-455a-98bd-191bcb4c9991/volumes" Jan 26 23:30:30 crc kubenswrapper[4995]: I0126 23:30:30.535598 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.438490 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerStarted","Data":"6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.439864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerStarted","Data":"6780d1fd068d258f993980132b0b6bd2df34b9245721daf2a80227aeaf1d0ca4"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.441190 4995 generic.go:334] "Generic (PLEG): container finished" podID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerID="27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74" exitCode=0 Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.441290 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerDied","Data":"27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.443955 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9","Type":"ContainerStarted","Data":"0954fd65e1cb05e0fa6de2a487d8062942ffb2496c99f5b616fd7a07a90b35c9"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.443994 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.444006 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/memcached-0" event={"ID":"9e495843-c3b4-4d2e-9c40-b11f0d95b5f9","Type":"ContainerStarted","Data":"a4cafba8cb82575ae1601987b63e67d675aee0ec038ba40ab42c92ee946dad4b"} Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.467795 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.467772101 podStartE2EDuration="2.467772101s" podCreationTimestamp="2026-01-26 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:31.458001487 +0000 UTC m=+1335.622708972" watchObservedRunningTime="2026-01-26 23:30:31.467772101 +0000 UTC m=+1335.632479576" Jan 26 23:30:31 crc kubenswrapper[4995]: I0126 23:30:31.508023 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/memcached-0" podStartSLOduration=2.508003597 podStartE2EDuration="2.508003597s" podCreationTimestamp="2026-01-26 23:30:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:31.486381996 +0000 UTC m=+1335.651089461" watchObservedRunningTime="2026-01-26 23:30:31.508003597 +0000 UTC m=+1335.672711072" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.839749 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860225 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860281 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860302 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.860363 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.870759 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.877271 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.888276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts" (OuterVolumeSpecName: "scripts") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.892929 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl" (OuterVolumeSpecName: "kube-api-access-k84vl") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "kube-api-access-k84vl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.902351 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.910330 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data" (OuterVolumeSpecName: "config-data") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961637 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961680 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") pod \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\" (UID: \"c5595470-f70f-4bc9-9012-b939a6b2fc0f\") " Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961984 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.961997 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k84vl\" (UniqueName: \"kubernetes.io/projected/c5595470-f70f-4bc9-9012-b939a6b2fc0f-kube-api-access-k84vl\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962005 4995 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962013 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.962021 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:32 crc kubenswrapper[4995]: I0126 23:30:32.986751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.023417 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "c5595470-f70f-4bc9-9012-b939a6b2fc0f" (UID: "c5595470-f70f-4bc9-9012-b939a6b2fc0f"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.063203 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.063246 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5595470-f70f-4bc9-9012-b939a6b2fc0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.475457 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.476189 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-bootstrap-sf9jb" event={"ID":"c5595470-f70f-4bc9-9012-b939a6b2fc0f","Type":"ContainerDied","Data":"9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd"} Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.476241 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6d8281bea2660095c708e79d4acf2f75f4040b4b00019316a9d2a2e7c295bd" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.861399 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:33 crc kubenswrapper[4995]: I0126 23:30:33.925957 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.862338 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.881438 4995 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.925483 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:38 crc kubenswrapper[4995]: I0126 23:30:38.950279 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.539208 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.553620 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.904015 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.915277 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/memcached-0" Jan 26 23:30:39 crc kubenswrapper[4995]: I0126 23:30:39.929319 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.082864 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:40 crc kubenswrapper[4995]: E0126 23:30:40.083480 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.083598 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc 
kubenswrapper[4995]: I0126 23:30:40.083878 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" containerName="keystone-bootstrap" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.084792 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.091290 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283850 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283920 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.283952 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284037 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284061 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284082 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284245 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.284308 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386017 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386376 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386499 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386804 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.386943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387147 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387296 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.387461 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.395441 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-config-data\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.395996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-fernet-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.397400 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-internal-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.398740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-public-tls-certs\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400353 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-scripts\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400675 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-credential-keys\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.400987 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-cert-memcached-mtls\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.405422 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hzfx\" (UniqueName: \"kubernetes.io/projected/257ee213-d2fa-4d94-9b26-0c62b5411e44-kube-api-access-4hzfx\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.408608 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/257ee213-d2fa-4d94-9b26-0c62b5411e44-combined-ca-bundle\") pod \"keystone-984bfcd89-8d4rw\" (UID: \"257ee213-d2fa-4d94-9b26-0c62b5411e44\") " pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.529494 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.555812 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:30:40 crc kubenswrapper[4995]: I0126 23:30:40.704941 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.173449 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/keystone-984bfcd89-8d4rw"] Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.304607 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.538872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" event={"ID":"257ee213-d2fa-4d94-9b26-0c62b5411e44","Type":"ContainerStarted","Data":"23c730fe870113fb434733985f99e79cb0778e240c1b753c124033eca5e27b4e"} Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539042 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" containerID="cri-o://7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" gracePeriod=30 Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539121 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" containerID="cri-o://7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" gracePeriod=30 Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539290 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:30:41 crc kubenswrapper[4995]: I0126 23:30:41.539320 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" event={"ID":"257ee213-d2fa-4d94-9b26-0c62b5411e44","Type":"ContainerStarted","Data":"e1f8e8f179d34f122aa821ddd0d1878723a611ef37641339a791f8ed2e0c069b"} Jan 26 23:30:41 
crc kubenswrapper[4995]: I0126 23:30:41.570302 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" podStartSLOduration=1.570280106 podStartE2EDuration="1.570280106s" podCreationTimestamp="2026-01-26 23:30:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:41.559004494 +0000 UTC m=+1345.723711979" watchObservedRunningTime="2026-01-26 23:30:41.570280106 +0000 UTC m=+1345.734987581" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.415784 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520204 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520353 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520394 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520502 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520544 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.520775 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") pod \"ca013d92-6492-419e-b3c4-cfd440daa2bb\" (UID: \"ca013d92-6492-419e-b3c4-cfd440daa2bb\") " Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.521387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs" (OuterVolumeSpecName: "logs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.521850 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ca013d92-6492-419e-b3c4-cfd440daa2bb-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.528394 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x" (OuterVolumeSpecName: "kube-api-access-wl66x") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "kube-api-access-wl66x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.597118 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602415 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" exitCode=0 Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602464 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" exitCode=143 Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.602663 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.603079 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data" (OuterVolumeSpecName: "config-data") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.605585 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.609457 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.620280 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: "ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623458 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623659 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623759 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl66x\" (UniqueName: \"kubernetes.io/projected/ca013d92-6492-419e-b3c4-cfd440daa2bb-kube-api-access-wl66x\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623838 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.623925 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.624001 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.631227 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ca013d92-6492-419e-b3c4-cfd440daa2bb" (UID: 
"ca013d92-6492-419e-b3c4-cfd440daa2bb"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669336 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669554 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669681 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"ca013d92-6492-419e-b3c4-cfd440daa2bb","Type":"ContainerDied","Data":"e52e933d187713fbdb117d3d9fedf5e70884c027cb040bccef5fbae1f2e8951c"} Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.669610 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.721886 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.726371 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ca013d92-6492-419e-b3c4-cfd440daa2bb-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.742643 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.743094 4995 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743156 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} err="failed to get container status \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743182 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.743932 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743956 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} err="failed to get container status \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": rpc error: code = NotFound desc = could not find container 
\"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.743971 4995 scope.go:117] "RemoveContainer" containerID="7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744178 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff"} err="failed to get container status \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": rpc error: code = NotFound desc = could not find container \"7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff\": container with ID starting with 7c00122fd9bce6835245f05c161622c8ef985a05d9c0d227a1df6ee151d73dff not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744196 4995 scope.go:117] "RemoveContainer" containerID="7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.744408 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837"} err="failed to get container status \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": rpc error: code = NotFound desc = could not find container \"7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837\": container with ID starting with 7220fc251f9b40bcdd4abd36e8a810c469a75ffe082a63e2f28e9476658c8837 not found: ID does not exist" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.934246 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.943654 4995 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962462 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.962832 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962853 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: E0126 23:30:42.962870 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.962879 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.963115 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-api" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.963140 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" containerName="watcher-kuttl-api-log" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.964186 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.969975 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:30:42 crc kubenswrapper[4995]: I0126 23:30:42.979396 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.131956 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132172 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132506 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132589 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.132655 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233673 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233699 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233748 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.233799 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.234115 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.237803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 
23:30:43.238491 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.248589 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.251502 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.255353 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"watcher-kuttl-api-0\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.285952 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:43 crc kubenswrapper[4995]: I0126 23:30:43.783153 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.527940 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca013d92-6492-419e-b3c4-cfd440daa2bb" path="/var/lib/kubelet/pods/ca013d92-6492-419e-b3c4-cfd440daa2bb/volumes" Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631705 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631753 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.631764 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerStarted","Data":"9a7d0124cd7a5360719a3b66cdef998880be62eb0874154eac5904934bc66e9c"} Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.632031 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:44 crc kubenswrapper[4995]: I0126 23:30:44.651433 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.651411371 podStartE2EDuration="2.651411371s" podCreationTimestamp="2026-01-26 23:30:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:30:44.650221191 +0000 UTC m=+1348.814928656" watchObservedRunningTime="2026-01-26 23:30:44.651411371 +0000 UTC m=+1348.816118836" Jan 26 23:30:47 crc kubenswrapper[4995]: I0126 23:30:47.008806 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:48 crc kubenswrapper[4995]: I0126 23:30:48.286226 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.286983 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.291041 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:53 crc kubenswrapper[4995]: I0126 23:30:53.717715 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:30:54 crc kubenswrapper[4995]: I0126 23:30:54.582624 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:30:59 crc kubenswrapper[4995]: E0126 23:30:59.993015 4995 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.164:36724->38.102.83.164:42819: write tcp 38.102.83.164:36724->38.102.83.164:42819: write: broken pipe Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.156035 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.159608 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.163358 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248027 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248122 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.248205 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.349941 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350012 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350067 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350592 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.350783 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.370014 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"redhat-operators-2nqkb\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:09 crc kubenswrapper[4995]: I0126 23:31:09.518847 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.032550 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.879990 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce" exitCode=0 Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.880040 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"} Jan 26 23:31:10 crc kubenswrapper[4995]: I0126 23:31:10.880121 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"db9b7d03b326053c44e82afb8fe738958f31d4b689aa3b9b6e0d3e7411632e71"} Jan 26 23:31:11 crc kubenswrapper[4995]: I0126 23:31:11.893774 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.279750 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/keystone-984bfcd89-8d4rw" Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.347002 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:12 crc kubenswrapper[4995]: I0126 23:31:12.347235 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" containerID="cri-o://d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" gracePeriod=30 Jan 26 23:31:13 crc kubenswrapper[4995]: I0126 23:31:13.912754 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae" exitCode=0 Jan 26 23:31:13 crc kubenswrapper[4995]: I0126 23:31:13.912814 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} Jan 26 23:31:14 crc kubenswrapper[4995]: I0126 23:31:14.923029 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerStarted","Data":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"} Jan 26 23:31:14 crc kubenswrapper[4995]: I0126 23:31:14.947650 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2nqkb" podStartSLOduration=2.441530821 podStartE2EDuration="5.947627131s" podCreationTimestamp="2026-01-26 23:31:09 +0000 UTC" firstStartedPulling="2026-01-26 23:31:10.881837467 +0000 UTC m=+1375.046544942" lastFinishedPulling="2026-01-26 23:31:14.387933787 +0000 UTC m=+1378.552641252" observedRunningTime="2026-01-26 23:31:14.940090743 +0000 UTC m=+1379.104798208" watchObservedRunningTime="2026-01-26 23:31:14.947627131 +0000 UTC m=+1379.112334596" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.931052 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941430 4995 generic.go:334] "Generic (PLEG): container finished" podID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" exitCode=0 Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerDied","Data":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"} Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941534 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" event={"ID":"284fb412-d705-4c0a-b11d-74f9074a9b6c","Type":"ContainerDied","Data":"c831199d822b765352d7f3cfddb29be2235d20cab03abeb963d2d581104d23cb"} Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.941554 4995 scope.go:117] "RemoveContainer" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953693 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953783 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953816 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953854 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953874 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953900 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.953970 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") pod \"284fb412-d705-4c0a-b11d-74f9074a9b6c\" (UID: \"284fb412-d705-4c0a-b11d-74f9074a9b6c\") " Jan 26 23:31:15 
crc kubenswrapper[4995]: I0126 23:31:15.992271 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.992422 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts" (OuterVolumeSpecName: "scripts") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:15 crc kubenswrapper[4995]: I0126 23:31:15.992320 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.008381 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9" (OuterVolumeSpecName: "kube-api-access-7w7l9") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "kube-api-access-7w7l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.020749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data" (OuterVolumeSpecName: "config-data") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.040318 4995 scope.go:117] "RemoveContainer" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:16 crc kubenswrapper[4995]: E0126 23:31:16.040834 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": container with ID starting with d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110 not found: ID does not exist" containerID="d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.040873 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110"} err="failed to get container status \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": rpc error: code = NotFound desc = could not find container \"d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110\": container with ID starting with d101bc15d5167fde36eceae416132c516f976792121c0a1cba1ece460b39a110 not found: ID does not exist" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.044423 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod 
"284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.047937 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.055754 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "284fb412-d705-4c0a-b11d-74f9074a9b6c" (UID: "284fb412-d705-4c0a-b11d-74f9074a9b6c"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056044 4995 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056073 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w7l9\" (UniqueName: \"kubernetes.io/projected/284fb412-d705-4c0a-b11d-74f9074a9b6c-kube-api-access-7w7l9\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056088 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056451 4995 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056521 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056576 4995 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056638 4995 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.056691 4995 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/284fb412-d705-4c0a-b11d-74f9074a9b6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:16 crc kubenswrapper[4995]: I0126 23:31:16.960366 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/keystone-7cb4bf847-27cbg" Jan 26 23:31:17 crc kubenswrapper[4995]: I0126 23:31:17.000215 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:17 crc kubenswrapper[4995]: I0126 23:31:17.007205 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-7cb4bf847-27cbg"] Jan 26 23:31:18 crc kubenswrapper[4995]: I0126 23:31:18.528558 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" path="/var/lib/kubelet/pods/284fb412-d705-4c0a-b11d-74f9074a9b6c/volumes" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.519543 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.519866 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969310 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969618 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" containerID="cri-o://e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969776 4995 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" containerID="cri-o://5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969824 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" containerID="cri-o://27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" gracePeriod=30 Jan 26 23:31:19 crc kubenswrapper[4995]: I0126 23:31:19.969872 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" containerID="cri-o://c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" gracePeriod=30 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.579861 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2nqkb" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" probeResult="failure" output=< Jan 26 23:31:20 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:31:20 crc kubenswrapper[4995]: > Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.965093 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997139 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997241 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997316 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997368 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997388 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997626 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" exitCode=2 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997649 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997656 4995 generic.go:334] "Generic (PLEG): container finished" podID="e2a868ee-449d-451a-8f70-ec5800231c45" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" exitCode=0 Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997674 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997709 
4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} Jan 26 23:31:20 crc kubenswrapper[4995]: I0126 23:31:20.997719 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"e2a868ee-449d-451a-8f70-ec5800231c45","Type":"ContainerDied","Data":"c97dd3f359f663362140f94ede7f9243c229adb056421298fec32584624f10b0"} Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.018316 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040179 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040234 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040300 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040330 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040366 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040392 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.040418 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") pod \"e2a868ee-449d-451a-8f70-ec5800231c45\" (UID: \"e2a868ee-449d-451a-8f70-ec5800231c45\") " Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.043212 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.044060 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd" (OuterVolumeSpecName: 
"run-httpd") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.044983 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.050221 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts" (OuterVolumeSpecName: "scripts") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.050374 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l" (OuterVolumeSpecName: "kube-api-access-9wz2l") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "kube-api-access-9wz2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.066256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.067571 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.084590 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085064 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085116 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085143 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085573 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 
27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085599 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085616 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.085892 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085919 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not 
exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.085935 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.087332 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087368 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087410 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087704 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID 
does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087752 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.087987 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088010 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088240 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088265 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088490 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container 
status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088514 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088718 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088739 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088935 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.088956 4995 scope.go:117] "RemoveContainer" 
containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089210 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089233 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089432 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089455 4995 scope.go:117] "RemoveContainer" containerID="5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089664 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a"} err="failed to get container status \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": rpc error: code = NotFound desc = could 
not find container \"5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a\": container with ID starting with 5f84d436c596ca5d8fcc9562e64ecac0db638eba31f15b14a3164214d898b94a not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089686 4995 scope.go:117] "RemoveContainer" containerID="27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089911 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06"} err="failed to get container status \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": rpc error: code = NotFound desc = could not find container \"27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06\": container with ID starting with 27bcff3006556609441bf1c5aa4f73feaa0d26541edf77f4714ab3d54604cb06 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.089933 4995 scope.go:117] "RemoveContainer" containerID="c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.090174 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0"} err="failed to get container status \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": rpc error: code = NotFound desc = could not find container \"c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0\": container with ID starting with c2cf4272a52cfb728f0344d44380f3814bfdcb55578aafaabae427a7681fd0e0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.090197 4995 scope.go:117] "RemoveContainer" containerID="e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 
23:31:21.090420 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0"} err="failed to get container status \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": rpc error: code = NotFound desc = could not find container \"e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0\": container with ID starting with e3425f051b6b19efa24682db9704df47e31ff7cd83679655f6891df58f9a51a0 not found: ID does not exist" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.096821 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.115268 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.131592 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data" (OuterVolumeSpecName: "config-data") pod "e2a868ee-449d-451a-8f70-ec5800231c45" (UID: "e2a868ee-449d-451a-8f70-ec5800231c45"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142666 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142698 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142707 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142719 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142727 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2a868ee-449d-451a-8f70-ec5800231c45-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142736 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142767 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2a868ee-449d-451a-8f70-ec5800231c45-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.142794 4995 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-9wz2l\" (UniqueName: \"kubernetes.io/projected/e2a868ee-449d-451a-8f70-ec5800231c45-kube-api-access-9wz2l\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.333433 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.341985 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.361687 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362079 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362157 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362180 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362188 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362203 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362212 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362230 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" 
containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362239 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: E0126 23:31:21.362270 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362278 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362491 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="proxy-httpd" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362512 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-notification-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362528 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="284fb412-d705-4c0a-b11d-74f9074a9b6c" containerName="keystone-api" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362540 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="sg-core" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.362554 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" containerName="ceilometer-central-agent" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.364339 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.366095 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.366408 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.368720 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.371368 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446732 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446776 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446804 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446828 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446843 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446870 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446917 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.446942 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547860 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547916 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.547988 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548047 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548094 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548145 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548585 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.548839 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552578 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.552757 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.553085 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.559081 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.569570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"ceilometer-0\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:21 crc kubenswrapper[4995]: I0126 23:31:21.682398 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:22 crc kubenswrapper[4995]: I0126 23:31:22.172293 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:22 crc kubenswrapper[4995]: I0126 23:31:22.527394 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2a868ee-449d-451a-8f70-ec5800231c45" path="/var/lib/kubelet/pods/e2a868ee-449d-451a-8f70-ec5800231c45/volumes" Jan 26 23:31:23 crc kubenswrapper[4995]: I0126 23:31:23.024996 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"} Jan 26 23:31:23 crc kubenswrapper[4995]: I0126 23:31:23.025380 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"f905ef058c30c4fbd868dd5ef0e469d865481d80ace2b60c0d346ab24f53efa4"} Jan 26 23:31:24 crc kubenswrapper[4995]: I0126 23:31:24.037138 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"} Jan 26 23:31:25 crc kubenswrapper[4995]: I0126 23:31:25.049289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"} Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.063691 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerStarted","Data":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"} Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.064066 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:26 crc kubenswrapper[4995]: I0126 23:31:26.090471 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.936746457 podStartE2EDuration="5.090456716s" podCreationTimestamp="2026-01-26 23:31:21 +0000 UTC" firstStartedPulling="2026-01-26 23:31:22.178671283 +0000 UTC m=+1386.343378748" lastFinishedPulling="2026-01-26 23:31:25.332381542 +0000 UTC m=+1389.497089007" observedRunningTime="2026-01-26 23:31:26.088542628 +0000 UTC m=+1390.253250093" watchObservedRunningTime="2026-01-26 23:31:26.090456716 +0000 UTC m=+1390.255164181" Jan 26 23:31:29 crc kubenswrapper[4995]: I0126 23:31:29.564612 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:29 crc kubenswrapper[4995]: I0126 23:31:29.625841 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.139773 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.140549 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2nqkb" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" containerID="cri-o://d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" gracePeriod=2 Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.734803 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.787364 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.787497 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.791384 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") pod \"fabd6826-906b-4dfc-af45-6d64bacdd794\" (UID: \"fabd6826-906b-4dfc-af45-6d64bacdd794\") " Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.792094 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities" (OuterVolumeSpecName: "utilities") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.793551 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r" (OuterVolumeSpecName: "kube-api-access-l728r") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "kube-api-access-l728r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.893532 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l728r\" (UniqueName: \"kubernetes.io/projected/fabd6826-906b-4dfc-af45-6d64bacdd794-kube-api-access-l728r\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.893568 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.933804 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabd6826-906b-4dfc-af45-6d64bacdd794" (UID: "fabd6826-906b-4dfc-af45-6d64bacdd794"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:33 crc kubenswrapper[4995]: I0126 23:31:33.994783 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabd6826-906b-4dfc-af45-6d64bacdd794-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143395 4995 generic.go:334] "Generic (PLEG): container finished" podID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" exitCode=0 Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143474 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2nqkb" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"} Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2nqkb" event={"ID":"fabd6826-906b-4dfc-af45-6d64bacdd794","Type":"ContainerDied","Data":"db9b7d03b326053c44e82afb8fe738958f31d4b689aa3b9b6e0d3e7411632e71"} Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.143688 4995 scope.go:117] "RemoveContainer" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.182419 4995 scope.go:117] "RemoveContainer" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.186196 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.193402 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2nqkb"] Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.217284 4995 scope.go:117] "RemoveContainer" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.236534 4995 scope.go:117] "RemoveContainer" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.237218 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": container with ID starting with d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16 not found: ID does not exist" containerID="d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237246 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16"} err="failed to get container status \"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": rpc error: code = NotFound desc = could not find container \"d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16\": container with ID starting with d98123160b69d4f0d6cbbc9abd21543e9b53a9f18c98b3b91a8ed45d2c3eff16 not found: ID does not exist" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237266 4995 scope.go:117] "RemoveContainer" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae" Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.237579 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": container with ID starting with 061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae not found: ID does not exist" containerID="061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237634 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae"} err="failed to get container status \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": rpc error: code = NotFound desc = could not find container \"061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae\": container with ID 
starting with 061bea0efed4b870a1c93f92e2f1319276ff3c301fb648c403eac58ea696baae not found: ID does not exist" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.237667 4995 scope.go:117] "RemoveContainer" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce" Jan 26 23:31:34 crc kubenswrapper[4995]: E0126 23:31:34.238092 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": container with ID starting with 227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce not found: ID does not exist" containerID="227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.238128 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce"} err="failed to get container status \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": rpc error: code = NotFound desc = could not find container \"227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce\": container with ID starting with 227a08c7949920c190c0956e76d70cfb11cffd07bbe2af50ce313362a3c4e5ce not found: ID does not exist" Jan 26 23:31:34 crc kubenswrapper[4995]: I0126 23:31:34.532859 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" path="/var/lib/kubelet/pods/fabd6826-906b-4dfc-af45-6d64bacdd794/volumes" Jan 26 23:31:40 crc kubenswrapper[4995]: I0126 23:31:40.893989 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:31:40 crc kubenswrapper[4995]: I0126 
23:31:40.894744 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:31:51 crc kubenswrapper[4995]: I0126 23:31:51.691696 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.844594 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.852936 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5tdfh"] Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.902233 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903072 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-utilities" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903201 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-utilities" Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903318 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-content" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903395 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="extract-content" Jan 26 23:31:53 crc kubenswrapper[4995]: E0126 23:31:53.903482 4995 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903561 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.903850 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabd6826-906b-4dfc-af45-6d64bacdd794" containerName="registry-server" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.904612 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.962162 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.969888 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.970163 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" containerID="cri-o://506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" gracePeriod=30 Jan 26 23:31:53 crc kubenswrapper[4995]: I0126 23:31:53.970537 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" containerID="cri-o://99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" gracePeriod=30 Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.046290 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75z2t\" (UniqueName: 
\"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.046405 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.062174 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.062414 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" containerID="cri-o://6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" gracePeriod=30 Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.082426 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.082691 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" containerID="cri-o://a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" gracePeriod=30 Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.147866 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.148003 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.149009 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.167655 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"watcher26de-account-delete-9k8t5\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.222137 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.371666 4995 generic.go:334] "Generic (PLEG): container finished" podID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" exitCode=143 Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.371721 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"} Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.543675 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74804f16-0037-44f0-a6a5-71414a33cee2" path="/var/lib/kubelet/pods/74804f16-0037-44f0-a6a5-71414a33cee2/volumes" Jan 26 23:31:54 crc kubenswrapper[4995]: I0126 23:31:54.736816 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.299670 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372708 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372766 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372870 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372926 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.372961 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") pod \"dfd66ee0-752c-4d44-92e1-a287384642e2\" (UID: \"dfd66ee0-752c-4d44-92e1-a287384642e2\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.373628 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs" (OuterVolumeSpecName: "logs") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390702 4995 generic.go:334] "Generic (PLEG): container finished" podID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" exitCode=0 Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"} Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390864 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"dfd66ee0-752c-4d44-92e1-a287384642e2","Type":"ContainerDied","Data":"9a7d0124cd7a5360719a3b66cdef998880be62eb0874154eac5904934bc66e9c"} Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.390886 4995 scope.go:117] "RemoveContainer" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.391092 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.401249 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407524 4995 generic.go:334] "Generic (PLEG): container finished" podID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerID="92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b" exitCode=0 Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407647 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerDied","Data":"92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b"} Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.407693 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerStarted","Data":"027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839"} Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411251 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78" (OuterVolumeSpecName: "kube-api-access-xkr78") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "kube-api-access-xkr78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411475 4995 generic.go:334] "Generic (PLEG): container finished" podID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerID="6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" exitCode=0 Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.411607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerDied","Data":"6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922"} Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.445187 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.449926 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data" (OuterVolumeSpecName: "config-data") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475377 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475501 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475558 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dfd66ee0-752c-4d44-92e1-a287384642e2-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475617 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkr78\" (UniqueName: \"kubernetes.io/projected/dfd66ee0-752c-4d44-92e1-a287384642e2-kube-api-access-xkr78\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.475673 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.481850 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "dfd66ee0-752c-4d44-92e1-a287384642e2" (UID: "dfd66ee0-752c-4d44-92e1-a287384642e2"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.495974 4995 scope.go:117] "RemoveContainer" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.525868 4995 scope.go:117] "RemoveContainer" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" Jan 26 23:31:55 crc kubenswrapper[4995]: E0126 23:31:55.526292 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": container with ID starting with 99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6 not found: ID does not exist" containerID="99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.526387 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6"} err="failed to get container status \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": rpc error: code = NotFound desc = could not find container \"99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6\": container with ID starting with 99838399f8cea7a8777ded90914c12efbc086fb670c82ce6735e276f1b774fd6 not found: ID does not exist" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.526466 4995 scope.go:117] "RemoveContainer" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" Jan 26 23:31:55 crc kubenswrapper[4995]: E0126 23:31:55.527049 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": container with ID starting with 
506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1 not found: ID does not exist" containerID="506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.527165 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1"} err="failed to get container status \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": rpc error: code = NotFound desc = could not find container \"506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1\": container with ID starting with 506d9a96d2911a8ce04b48322a94456c89fa55e4536a6c663f8e1f0c6430aec1 not found: ID does not exist" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.577210 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dfd66ee0-752c-4d44-92e1-a287384642e2-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.591478 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678286 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678434 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678504 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678579 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678622 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678642 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs" (OuterVolumeSpecName: "logs") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.678676 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") pod \"118c105c-80f5-4d0f-94c2-17f3269025ca\" (UID: \"118c105c-80f5-4d0f-94c2-17f3269025ca\") " Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.679081 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/118c105c-80f5-4d0f-94c2-17f3269025ca-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.683533 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9" (OuterVolumeSpecName: "kube-api-access-7djc9") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "kube-api-access-7djc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.705209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.711931 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.733305 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data" (OuterVolumeSpecName: "config-data") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.734147 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.741326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "118c105c-80f5-4d0f-94c2-17f3269025ca" (UID: "118c105c-80f5-4d0f-94c2-17f3269025ca"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.743535 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781080 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781140 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781152 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781164 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7djc9\" (UniqueName: \"kubernetes.io/projected/118c105c-80f5-4d0f-94c2-17f3269025ca-kube-api-access-7djc9\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:55 crc kubenswrapper[4995]: I0126 23:31:55.781176 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/118c105c-80f5-4d0f-94c2-17f3269025ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430010 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430008 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"118c105c-80f5-4d0f-94c2-17f3269025ca","Type":"ContainerDied","Data":"6780d1fd068d258f993980132b0b6bd2df34b9245721daf2a80227aeaf1d0ca4"} Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.430144 4995 scope.go:117] "RemoveContainer" containerID="6afd5efd4dcf18121a5fd9c8de3507a46a5319c8e70b9ca7bc1a4ac45736a922" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.491667 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.508870 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.542412 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" path="/var/lib/kubelet/pods/118c105c-80f5-4d0f-94c2-17f3269025ca/volumes" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.543427 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" path="/var/lib/kubelet/pods/dfd66ee0-752c-4d44-92e1-a287384642e2/volumes" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.857660 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.903830 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") pod \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.903897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") pod \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\" (UID: \"a31e3b1c-6d46-44f3-9dee-2e8652ca0807\") " Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.904809 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a31e3b1c-6d46-44f3-9dee-2e8652ca0807" (UID: "a31e3b1c-6d46-44f3-9dee-2e8652ca0807"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:31:56 crc kubenswrapper[4995]: I0126 23:31:56.908662 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t" (OuterVolumeSpecName: "kube-api-access-75z2t") pod "a31e3b1c-6d46-44f3-9dee-2e8652ca0807" (UID: "a31e3b1c-6d46-44f3-9dee-2e8652ca0807"). InnerVolumeSpecName "kube-api-access-75z2t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.006259 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.006313 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75z2t\" (UniqueName: \"kubernetes.io/projected/a31e3b1c-6d46-44f3-9dee-2e8652ca0807-kube-api-access-75z2t\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.450676 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" event={"ID":"a31e3b1c-6d46-44f3-9dee-2e8652ca0807","Type":"ContainerDied","Data":"027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839"} Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454799 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="027bfffc02dc6c70e1854db7bc0d78b996ee8cb3498ef38d0dac117a85c79839" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.454858 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher26de-account-delete-9k8t5" Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458301 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" containerID="cri-o://cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458326 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" containerID="cri-o://3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458460 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" containerID="cri-o://f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" gracePeriod=30 Jan 26 23:31:57 crc kubenswrapper[4995]: I0126 23:31:57.458454 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" containerID="cri-o://fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" gracePeriod=30 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.199201 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.332725 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333070 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333116 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333171 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333238 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") pod \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\" (UID: \"77a1e608-88ba-44dc-a4fd-86bd6bd980c1\") " Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.333749 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs" (OuterVolumeSpecName: "logs") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.338311 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj" (OuterVolumeSpecName: "kube-api-access-cmhpj") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "kube-api-access-cmhpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.355729 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.371342 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data" (OuterVolumeSpecName: "config-data") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.403009 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "77a1e608-88ba-44dc-a4fd-86bd6bd980c1" (UID: "77a1e608-88ba-44dc-a4fd-86bd6bd980c1"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434675 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434710 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434722 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434732 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.434741 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmhpj\" (UniqueName: \"kubernetes.io/projected/77a1e608-88ba-44dc-a4fd-86bd6bd980c1-kube-api-access-cmhpj\") on node \"crc\" DevicePath \"\"" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462549 4995 generic.go:334] "Generic (PLEG): container finished" podID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462608 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462619 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerDied","Data":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462647 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"77a1e608-88ba-44dc-a4fd-86bd6bd980c1","Type":"ContainerDied","Data":"c39928c36c8af1a9535983a878c5e72ae844418dbec585db7b98acb4c5ad7317"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.462666 4995 scope.go:117] "RemoveContainer" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465682 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465706 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" exitCode=2 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465717 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" exitCode=0 Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465736 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"} Jan 26 23:31:58 crc 
kubenswrapper[4995]: I0126 23:31:58.465769 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.465778 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"} Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.493789 4995 scope.go:117] "RemoveContainer" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.493938 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:31:58 crc kubenswrapper[4995]: E0126 23:31:58.494788 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": container with ID starting with a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6 not found: ID does not exist" containerID="a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.494865 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6"} err="failed to get container status \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": rpc error: code = NotFound desc = could not find container \"a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6\": container with ID starting with a05a773aa00ba14b5cc811af4a6066e26372cd17bc2721a24de9ad9bff3249b6 not found: ID does not 
exist" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.502548 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.526487 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" path="/var/lib/kubelet/pods/77a1e608-88ba-44dc-a4fd-86bd6bd980c1/volumes" Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.949776 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.964694 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8r7vh"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.977598 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.991212 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:58 crc kubenswrapper[4995]: I0126 23:31:58.999604 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher26de-account-delete-9k8t5"] Jan 26 23:31:59 crc kubenswrapper[4995]: I0126 23:31:59.005759 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-26de-account-create-update-h8699"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.006083 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092270 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092420 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092465 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092697 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092763 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092792 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092821 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.092850 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") pod \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\" (UID: \"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe\") " Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.093389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.093590 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.103399 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts" (OuterVolumeSpecName: "scripts") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.114870 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx" (OuterVolumeSpecName: "kube-api-access-km5mx") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "kube-api-access-km5mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.168551 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.189876 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.191451 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194242 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194274 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194285 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194298 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194310 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km5mx\" (UniqueName: \"kubernetes.io/projected/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-kube-api-access-km5mx\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194319 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.194328 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.217480 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data" (OuterVolumeSpecName: "config-data") pod "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" (UID: "6e380e3b-629e-428a-b7d0-cbfb05b6b1fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.296276 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369196 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369585 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369608 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369623 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369632 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369647 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369656 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369673 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369680 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369709 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369717 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369734 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369742 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369758 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369765 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369778 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369786 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.369797 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369805 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369975 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="118c105c-80f5-4d0f-94c2-17f3269025ca" containerName="watcher-decision-engine" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.369994 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-central-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370008 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-kuttl-api-log" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370019 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="77a1e608-88ba-44dc-a4fd-86bd6bd980c1" containerName="watcher-applier" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370031 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" containerName="mariadb-account-delete" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 
23:32:00.370042 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="ceilometer-notification-agent" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370049 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="sg-core" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370061 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfd66ee0-752c-4d44-92e1-a287384642e2" containerName="watcher-api" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370072 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerName="proxy-httpd" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.370801 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.381649 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.473590 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.474738 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.477841 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490067 4995 generic.go:334] "Generic (PLEG): container finished" podID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" exitCode=0 Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490150 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490155 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"} Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490234 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"6e380e3b-629e-428a-b7d0-cbfb05b6b1fe","Type":"ContainerDied","Data":"f905ef058c30c4fbd868dd5ef0e469d865481d80ace2b60c0d346ab24f53efa4"} Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.490266 4995 scope.go:117] "RemoveContainer" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.499355 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.499436 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.505286 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.523069 4995 scope.go:117] "RemoveContainer" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.539951 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360b1483-8046-4c4c-920d-69387e2fbbed" path="/var/lib/kubelet/pods/360b1483-8046-4c4c-920d-69387e2fbbed/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.540753 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31e3b1c-6d46-44f3-9dee-2e8652ca0807" path="/var/lib/kubelet/pods/a31e3b1c-6d46-44f3-9dee-2e8652ca0807/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.541842 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab224b66-6f5e-4e78-bdc4-e913dcb2250a" path="/var/lib/kubelet/pods/ab224b66-6f5e-4e78-bdc4-e913dcb2250a/volumes" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.542537 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.560448 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.566936 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 
23:32:00.567754 4995 scope.go:117] "RemoveContainer" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.572557 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.573721 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.575559 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.576425 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.576302 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603831 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603910 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.603957 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.604146 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.604901 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.616436 4995 scope.go:117] "RemoveContainer" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.622912 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"watcher-db-create-8jh4s\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.644387 4995 scope.go:117] "RemoveContainer" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.645206 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": container with ID starting with f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249 not found: ID does not exist" containerID="f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.645236 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249"} err="failed to get container status \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": rpc error: code = NotFound desc = could not find container \"f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249\": container with ID starting with f35357d14e594214ca221929223c197a44732a1fae2f5c8f55cc60606e1a4249 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.645257 4995 scope.go:117] "RemoveContainer" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.646394 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": container with ID starting with 3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624 not found: ID does not exist" containerID="3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.646462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624"} err="failed to get container status \"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": rpc error: code = NotFound desc = could not find container 
\"3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624\": container with ID starting with 3eabb53f40754235e8f584f7ff7e04f41331ae698755bd28e7a0fa11eb232624 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.646521 4995 scope.go:117] "RemoveContainer" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.647144 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": container with ID starting with fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246 not found: ID does not exist" containerID="fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647166 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246"} err="failed to get container status \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": rpc error: code = NotFound desc = could not find container \"fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246\": container with ID starting with fafd4c14da4a2ba1b84cd28ed159d6648bd59f4390c3b9d28d83bac7b1ced246 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647183 4995 scope.go:117] "RemoveContainer" containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: E0126 23:32:00.647540 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": container with ID starting with cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296 not found: ID does not exist" 
containerID="cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.647565 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296"} err="failed to get container status \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": rpc error: code = NotFound desc = could not find container \"cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296\": container with ID starting with cac25dd20a68e675b47b35475da80411652636ef6d4f7b3b0be1b6ac12350296 not found: ID does not exist" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.688163 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705249 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705337 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705370 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705407 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705540 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705614 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705681 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705839 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: 
\"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.705992 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.706160 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.707011 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.723937 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"watcher-fed2-account-create-update-xlm64\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.791919 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.809911 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810240 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810274 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810311 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810339 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810394 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810416 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.810434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.819209 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.821618 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.822134 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.826734 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.827078 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.833738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.836240 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.850193 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"ceilometer-0\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " pod="watcher-kuttl-default/ceilometer-0" Jan 
26 23:32:00 crc kubenswrapper[4995]: I0126 23:32:00.910866 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.367328 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"] Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.512803 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerStarted","Data":"3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a"} Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.559462 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"] Jan 26 23:32:01 crc kubenswrapper[4995]: W0126 23:32:01.561522 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb49764a_224f_47ef_ad9b_016ac609fc81.slice/crio-6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286 WatchSource:0}: Error finding container 6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286: Status 404 returned error can't find the container with id 6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286 Jan 26 23:32:01 crc kubenswrapper[4995]: I0126 23:32:01.718743 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:01 crc kubenswrapper[4995]: W0126 23:32:01.719719 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19dab45_658a_43e5_93f9_7405f4e265b8.slice/crio-058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc WatchSource:0}: Error finding container 058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc: Status 404 returned 
error can't find the container with id 058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.523789 4995 generic.go:334] "Generic (PLEG): container finished" podID="db49764a-224f-47ef-ad9b-016ac609fc81" containerID="42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988" exitCode=0 Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.527499 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e380e3b-629e-428a-b7d0-cbfb05b6b1fe" path="/var/lib/kubelet/pods/6e380e3b-629e-428a-b7d0-cbfb05b6b1fe/volumes" Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerDied","Data":"42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528385 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerStarted","Data":"6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528401 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528415 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc"} Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528481 4995 generic.go:334] "Generic 
(PLEG): container finished" podID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerID="9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39" exitCode=0 Jan 26 23:32:02 crc kubenswrapper[4995]: I0126 23:32:02.528521 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerDied","Data":"9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39"} Jan 26 23:32:03 crc kubenswrapper[4995]: I0126 23:32:03.538154 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.150385 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.158916 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293726 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") pod \"e5c0fe20-e8a0-4e46-889c-f7484847605c\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293833 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") pod \"db49764a-224f-47ef-ad9b-016ac609fc81\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.293866 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") pod \"e5c0fe20-e8a0-4e46-889c-f7484847605c\" (UID: \"e5c0fe20-e8a0-4e46-889c-f7484847605c\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.294018 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") pod \"db49764a-224f-47ef-ad9b-016ac609fc81\" (UID: \"db49764a-224f-47ef-ad9b-016ac609fc81\") " Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.295213 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db49764a-224f-47ef-ad9b-016ac609fc81" (UID: "db49764a-224f-47ef-ad9b-016ac609fc81"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.295554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e5c0fe20-e8a0-4e46-889c-f7484847605c" (UID: "e5c0fe20-e8a0-4e46-889c-f7484847605c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.299903 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl" (OuterVolumeSpecName: "kube-api-access-qqhxl") pod "db49764a-224f-47ef-ad9b-016ac609fc81" (UID: "db49764a-224f-47ef-ad9b-016ac609fc81"). InnerVolumeSpecName "kube-api-access-qqhxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.300564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs" (OuterVolumeSpecName: "kube-api-access-hwhjs") pod "e5c0fe20-e8a0-4e46-889c-f7484847605c" (UID: "e5c0fe20-e8a0-4e46-889c-f7484847605c"). InnerVolumeSpecName "kube-api-access-hwhjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395883 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e5c0fe20-e8a0-4e46-889c-f7484847605c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395925 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqhxl\" (UniqueName: \"kubernetes.io/projected/db49764a-224f-47ef-ad9b-016ac609fc81-kube-api-access-qqhxl\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395936 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwhjs\" (UniqueName: \"kubernetes.io/projected/e5c0fe20-e8a0-4e46-889c-f7484847605c-kube-api-access-hwhjs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.395945 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db49764a-224f-47ef-ad9b-016ac609fc81-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552821 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552860 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-fed2-account-create-update-xlm64" event={"ID":"db49764a-224f-47ef-ad9b-016ac609fc81","Type":"ContainerDied","Data":"6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.552908 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da1901ade9731de525718ca7906515f7aba2caceef37214aa1c0fbe0c2a4286" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.557885 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561587 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-8jh4s" event={"ID":"e5c0fe20-e8a0-4e46-889c-f7484847605c","Type":"ContainerDied","Data":"3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a"} Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561610 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e32960f3dc06b2e789bd686d4b6b46e8c8920f01793f07e702510a396be236a" Jan 26 23:32:04 crc kubenswrapper[4995]: I0126 23:32:04.561690 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-8jh4s" Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.572386 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerStarted","Data":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.572600 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:05 crc kubenswrapper[4995]: I0126 23:32:05.604839 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.14724098 podStartE2EDuration="5.604822972s" podCreationTimestamp="2026-01-26 23:32:00 +0000 UTC" firstStartedPulling="2026-01-26 23:32:01.72187806 +0000 UTC m=+1425.886585525" lastFinishedPulling="2026-01-26 23:32:05.179460052 +0000 UTC m=+1429.344167517" observedRunningTime="2026-01-26 23:32:05.601458597 +0000 UTC m=+1429.766166062" watchObservedRunningTime="2026-01-26 23:32:05.604822972 +0000 UTC m=+1429.769530437" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.182807 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: E0126 23:32:06.183159 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183185 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: E0126 23:32:06.183215 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 
23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183223 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183381 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" containerName="mariadb-account-create-update" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.183394 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" containerName="mariadb-database-create" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.184007 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.186829 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rmsbz" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.187166 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.193758 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226373 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226534 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226606 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.226806 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328628 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328689 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328742 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.328776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.333775 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.335818 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.344406 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.344812 4995 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"watcher-kuttl-db-sync-5mjdk\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.506352 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:06 crc kubenswrapper[4995]: I0126 23:32:06.981960 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:06 crc kubenswrapper[4995]: W0126 23:32:06.984488 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc265038c_ebe8_4aa1_acda_f45361fbd885.slice/crio-9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1 WatchSource:0}: Error finding container 9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1: Status 404 returned error can't find the container with id 9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1 Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.611719 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerStarted","Data":"0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427"} Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.612014 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerStarted","Data":"9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1"} Jan 26 23:32:07 crc kubenswrapper[4995]: I0126 23:32:07.629567 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" podStartSLOduration=1.6295508650000001 podStartE2EDuration="1.629550865s" podCreationTimestamp="2026-01-26 23:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:07.627118894 +0000 UTC m=+1431.791826359" watchObservedRunningTime="2026-01-26 23:32:07.629550865 +0000 UTC m=+1431.794258320" Jan 26 23:32:09 crc kubenswrapper[4995]: I0126 23:32:09.628689 4995 generic.go:334] "Generic (PLEG): container finished" podID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerID="0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427" exitCode=0 Jan 26 23:32:09 crc kubenswrapper[4995]: I0126 23:32:09.628813 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerDied","Data":"0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427"} Jan 26 23:32:10 crc kubenswrapper[4995]: I0126 23:32:10.893796 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:32:10 crc kubenswrapper[4995]: I0126 23:32:10.894297 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.000153 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007347 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007387 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007436 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.007455 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") pod \"c265038c-ebe8-4aa1-acda-f45361fbd885\" (UID: \"c265038c-ebe8-4aa1-acda-f45361fbd885\") " Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.017434 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x" (OuterVolumeSpecName: "kube-api-access-gfj9x") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "kube-api-access-gfj9x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.018514 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.050779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.066052 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data" (OuterVolumeSpecName: "config-data") pod "c265038c-ebe8-4aa1-acda-f45361fbd885" (UID: "c265038c-ebe8-4aa1-acda-f45361fbd885"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108486 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108707 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.108926 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfj9x\" (UniqueName: \"kubernetes.io/projected/c265038c-ebe8-4aa1-acda-f45361fbd885-kube-api-access-gfj9x\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.109057 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c265038c-ebe8-4aa1-acda-f45361fbd885-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647826 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" event={"ID":"c265038c-ebe8-4aa1-acda-f45361fbd885","Type":"ContainerDied","Data":"9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1"} Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647864 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc2f1c990b94f43c5a8a87338b008b87968e70be2f3c1e2f10cae06d5a708d1" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.647928 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.927148 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: E0126 23:32:11.928645 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.928776 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.929072 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" containerName="watcher-kuttl-db-sync" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.930343 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.932422 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-rmsbz" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.932429 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.941709 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.956333 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.970838 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.974725 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:32:11 crc kubenswrapper[4995]: I0126 23:32:11.980430 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.022677 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.022968 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023054 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023165 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod 
\"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023252 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023444 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023818 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.023897 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.034075 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.035007 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.038042 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.050289 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125518 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125564 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125631 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125658 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: 
\"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125675 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125696 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125713 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.125975 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126095 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126153 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126172 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126195 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126266 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 
26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126310 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126459 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126503 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.126551 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 
23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.130015 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.131597 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.131735 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.132317 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.132698 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.133763 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.135926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.136534 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.144886 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"watcher-kuttl-api-0\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.145182 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227688 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227764 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227819 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.227847 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.228350 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.228362 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.231825 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.231939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.232390 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.246808 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"watcher-kuttl-applier-0\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.249657 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.291520 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.348529 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.719184 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.851683 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: I0126 23:32:12.925608 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:12 crc kubenswrapper[4995]: W0126 23:32:12.958152 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55417497_6ca7_42c8_ba53_58da68837328.slice/crio-a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf WatchSource:0}: Error finding container a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf: Status 404 returned error can't find the container with id a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.665210 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerStarted","Data":"b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.665616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerStarted","Data":"a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666464 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666506 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666516 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerStarted","Data":"68985bac7f5331913df7825c5a17b60e9e19f5cb1a899bdf14134c0eda5b546b"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.666654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.667785 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerStarted","Data":"6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc"} Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.667896 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerStarted","Data":"fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf"} Jan 26 
23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.686638 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.686616847 podStartE2EDuration="2.686616847s" podCreationTimestamp="2026-01-26 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.680964115 +0000 UTC m=+1437.845671580" watchObservedRunningTime="2026-01-26 23:32:13.686616847 +0000 UTC m=+1437.851324312" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.701849 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.701828168 podStartE2EDuration="2.701828168s" podCreationTimestamp="2026-01-26 23:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.701009477 +0000 UTC m=+1437.865716952" watchObservedRunningTime="2026-01-26 23:32:13.701828168 +0000 UTC m=+1437.866535633" Jan 26 23:32:13 crc kubenswrapper[4995]: I0126 23:32:13.722600 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=1.722577517 podStartE2EDuration="1.722577517s" podCreationTimestamp="2026-01-26 23:32:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:13.714572047 +0000 UTC m=+1437.879279522" watchObservedRunningTime="2026-01-26 23:32:13.722577517 +0000 UTC m=+1437.887284972" Jan 26 23:32:15 crc kubenswrapper[4995]: I0126 23:32:15.827165 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:17 crc kubenswrapper[4995]: I0126 23:32:17.249830 4995 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:17 crc kubenswrapper[4995]: I0126 23:32:17.349881 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.250704 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.292363 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.349792 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.402780 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.402842 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.437991 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.757281 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.768905 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.786697 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 
23:32:22 crc kubenswrapper[4995]: I0126 23:32:22.799885 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.767741 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.768076 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" containerID="cri-o://16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769529 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" containerID="cri-o://380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769612 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" containerID="cri-o://148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.769531 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" containerID="cri-o://d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" gracePeriod=30 Jan 26 23:32:24 crc kubenswrapper[4995]: I0126 23:32:24.782207 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" 
podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.168:3000/\": read tcp 10.217.0.2:45262->10.217.0.168:3000: read: connection reset by peer" Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.795844 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" exitCode=0 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796201 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" exitCode=2 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796215 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" exitCode=0 Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.795947 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796258 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} Jan 26 23:32:25 crc kubenswrapper[4995]: I0126 23:32:25.796277 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.729022 4995 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.746626 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-5mjdk"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.799170 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.800500 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.816321 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.846972 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.847172 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine" containerID="cri-o://b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.912814 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.913091 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930346 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930626 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log" containerID="cri-o://1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.930963 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api" containerID="cri-o://d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" gracePeriod=30 Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.975144 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:28 crc kubenswrapper[4995]: I0126 23:32:28.975339 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier" containerID="cri-o://6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc" gracePeriod=30 Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.018976 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod 
\"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.019292 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.020074 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.048232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"watcherfed2-account-delete-vsqtt\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.124929 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.612763 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"] Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.694020 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736047 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736435 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736464 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736511 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736587 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736700 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736801 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") pod \"a19dab45-658a-43e5-93f9-7405f4e265b8\" (UID: \"a19dab45-658a-43e5-93f9-7405f4e265b8\") " Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.736817 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.739347 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.739373 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a19dab45-658a-43e5-93f9-7405f4e265b8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.743756 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb" (OuterVolumeSpecName: "kube-api-access-rdltb") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "kube-api-access-rdltb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.744303 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts" (OuterVolumeSpecName: "scripts") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.785262 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.805758 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.837939 4995 generic.go:334] "Generic (PLEG): container finished" podID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" exitCode=143 Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.838018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.839472 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerStarted","Data":"277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.839515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerStarted","Data":"9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.841246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.841422 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843177 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843211 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdltb\" (UniqueName: \"kubernetes.io/projected/a19dab45-658a-43e5-93f9-7405f4e265b8-kube-api-access-rdltb\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.843223 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845831 4995 generic.go:334] "Generic (PLEG): container finished" podID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" exitCode=0 
Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845873 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845906 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a19dab45-658a-43e5-93f9-7405f4e265b8","Type":"ContainerDied","Data":"058946e2dc5a1942a8072bf11dde6ef901ddf82c71950b72a1b542ae3f72abdc"} Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.845924 4995 scope.go:117] "RemoveContainer" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.846443 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.862676 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" podStartSLOduration=1.862654748 podStartE2EDuration="1.862654748s" podCreationTimestamp="2026-01-26 23:32:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:29.852844012 +0000 UTC m=+1454.017551477" watchObservedRunningTime="2026-01-26 23:32:29.862654748 +0000 UTC m=+1454.027362213" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.865222 4995 scope.go:117] "RemoveContainer" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.901950 4995 scope.go:117] "RemoveContainer" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.911392 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data" (OuterVolumeSpecName: "config-data") pod "a19dab45-658a-43e5-93f9-7405f4e265b8" (UID: "a19dab45-658a-43e5-93f9-7405f4e265b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.927827 4995 scope.go:117] "RemoveContainer" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.944278 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.944314 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a19dab45-658a-43e5-93f9-7405f4e265b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957221 4995 scope.go:117] "RemoveContainer" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.957686 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": container with ID starting with d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c not found: ID does not exist" containerID="d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957754 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c"} err="failed to get container status 
\"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": rpc error: code = NotFound desc = could not find container \"d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c\": container with ID starting with d00e2c2bc05adf939f5d6f08bc3c1dd56d7cce4c5bcd7f7802dab5c433cccf6c not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.957783 4995 scope.go:117] "RemoveContainer" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958153 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": container with ID starting with 148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800 not found: ID does not exist" containerID="148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958201 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800"} err="failed to get container status \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": rpc error: code = NotFound desc = could not find container \"148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800\": container with ID starting with 148bb5affde29cd4f3e292fd21f0cbe911dfae97e55feef92b1097dcba022800 not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958239 4995 scope.go:117] "RemoveContainer" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958593 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": container with ID starting with 380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180 not found: ID does not exist" containerID="380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958650 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180"} err="failed to get container status \"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": rpc error: code = NotFound desc = could not find container \"380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180\": container with ID starting with 380d2ad7810df87210451dc7828952e305bed4e6be389c226f196789c8140180 not found: ID does not exist" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958684 4995 scope.go:117] "RemoveContainer" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: E0126 23:32:29.958946 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": container with ID starting with 16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb not found: ID does not exist" containerID="16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb" Jan 26 23:32:29 crc kubenswrapper[4995]: I0126 23:32:29.958970 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb"} err="failed to get container status \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": rpc error: code = NotFound desc = could not find container \"16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb\": container with ID 
starting with 16c7cee2ba649c59891ef832d9356c33d12341f056f2b5013ed9161e2e05b6cb not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.196848 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.203152 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235472 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235883 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235906 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235924 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235932 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235952 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235961 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.235977 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" 
containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.235985 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236211 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-central-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236229 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="proxy-httpd" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236240 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="ceilometer-notification-agent" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.236259 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" containerName="sg-core" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.238054 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.242806 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.243002 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.243038 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.251758 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350260 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350305 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350358 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350414 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350548 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350648 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.350673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452286 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452370 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452402 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452431 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452506 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.452530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: 
\"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453271 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453398 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453781 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.453798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.458651 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.464709 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.466498 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.470328 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.474647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.475944 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"ceilometer-0\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.538428 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19dab45-658a-43e5-93f9-7405f4e265b8" path="/var/lib/kubelet/pods/a19dab45-658a-43e5-93f9-7405f4e265b8/volumes" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.539307 4995 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c265038c-ebe8-4aa1-acda-f45361fbd885" path="/var/lib/kubelet/pods/c265038c-ebe8-4aa1-acda-f45361fbd885/volumes" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.554634 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.616089 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656134 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656766 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656835 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 
23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656897 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.656968 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") pod \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\" (UID: \"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c\") " Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.660408 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs" (OuterVolumeSpecName: "logs") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.663622 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s" (OuterVolumeSpecName: "kube-api-access-lpw9s") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "kube-api-access-lpw9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.683325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.703471 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data" (OuterVolumeSpecName: "config-data") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.717243 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.741557 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" (UID: "2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761286 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpw9s\" (UniqueName: \"kubernetes.io/projected/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-kube-api-access-lpw9s\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761321 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761334 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761346 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761358 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.761371 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.858644 4995 generic.go:334] "Generic (PLEG): container finished" podID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerID="277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e" exitCode=0 Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.858723 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerDied","Data":"277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862569 4995 generic.go:334] "Generic (PLEG): container finished" podID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" exitCode=0 Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862602 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862618 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c","Type":"ContainerDied","Data":"68985bac7f5331913df7825c5a17b60e9e19f5cb1a899bdf14134c0eda5b546b"} Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862635 4995 scope.go:117] "RemoveContainer" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.862667 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.887347 4995 scope.go:117] "RemoveContainer" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.901376 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910256 4995 scope.go:117] "RemoveContainer" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.910731 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": container with ID starting with d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16 not found: ID does not exist" containerID="d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910794 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16"} err="failed to get container status \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": rpc error: code = NotFound desc = could not find container \"d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16\": container with ID starting with d6db2610486911c87079ecb08160b7dea56d26a305780010bc7629f3c9ad0a16 not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910825 4995 scope.go:117] "RemoveContainer" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.910956 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:30 crc kubenswrapper[4995]: E0126 23:32:30.911148 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": container with ID starting with 1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6 not found: ID does not exist" containerID="1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.911183 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6"} err="failed to get container status \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": rpc error: code = NotFound desc = could not find container \"1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6\": container with ID starting with 1624de01f6ea114b48ce07f14278ccba93d01712cbe8340cbbf48152b6e22bf6 not found: ID does not exist" Jan 26 23:32:30 crc kubenswrapper[4995]: I0126 23:32:30.999251 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:31 crc kubenswrapper[4995]: W0126 23:32:31.001531 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod632dc482_0650_4bfc_a47c_5a573888ab9a.slice/crio-7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7 WatchSource:0}: Error finding container 7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7: Status 404 returned error can't find the container with id 7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7 Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.086311 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:31 crc 
kubenswrapper[4995]: I0126 23:32:31.872547 4995 generic.go:334] "Generic (PLEG): container finished" podID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerID="6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc" exitCode=0 Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.872707 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerDied","Data":"6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.873081 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"b24eb3bf-4d35-4163-962f-f3ad03f82019","Type":"ContainerDied","Data":"fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.873096 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe618fa252c29164da67ca6fb2b81b5cfcd348cb451091a77890f43ff25b2bdf" Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.874789 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.874837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7"} Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.881569 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992706 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992808 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992845 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.992942 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.993023 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") pod \"b24eb3bf-4d35-4163-962f-f3ad03f82019\" (UID: \"b24eb3bf-4d35-4163-962f-f3ad03f82019\") " Jan 26 23:32:31 crc kubenswrapper[4995]: I0126 23:32:31.993981 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs" (OuterVolumeSpecName: "logs") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:31.998490 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz" (OuterVolumeSpecName: "kube-api-access-8jjtz") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "kube-api-access-8jjtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.040147 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.061524 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data" (OuterVolumeSpecName: "config-data") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.085000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b24eb3bf-4d35-4163-962f-f3ad03f82019" (UID: "b24eb3bf-4d35-4163-962f-f3ad03f82019"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095215 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b24eb3bf-4d35-4163-962f-f3ad03f82019-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095256 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095273 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jjtz\" (UniqueName: \"kubernetes.io/projected/b24eb3bf-4d35-4163-962f-f3ad03f82019-kube-api-access-8jjtz\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095286 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.095298 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b24eb3bf-4d35-4163-962f-f3ad03f82019-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.204611 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.297823 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") pod \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.298272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") pod \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\" (UID: \"23d44b8e-50b6-4446-a75b-ca68e79ff57f\") " Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.299129 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23d44b8e-50b6-4446-a75b-ca68e79ff57f" (UID: "23d44b8e-50b6-4446-a75b-ca68e79ff57f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.302501 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z" (OuterVolumeSpecName: "kube-api-access-7tj2z") pod "23d44b8e-50b6-4446-a75b-ca68e79ff57f" (UID: "23d44b8e-50b6-4446-a75b-ca68e79ff57f"). InnerVolumeSpecName "kube-api-access-7tj2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.400442 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23d44b8e-50b6-4446-a75b-ca68e79ff57f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.400480 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tj2z\" (UniqueName: \"kubernetes.io/projected/23d44b8e-50b6-4446-a75b-ca68e79ff57f-kube-api-access-7tj2z\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.530036 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" path="/var/lib/kubelet/pods/2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c/volumes" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.885487 4995 generic.go:334] "Generic (PLEG): container finished" podID="55417497-6ca7-42c8-ba53-58da68837328" containerID="b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986" exitCode=0 Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.885570 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerDied","Data":"b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.889242 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.892630 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.893691 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.893850 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.894218 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcherfed2-account-delete-vsqtt" event={"ID":"23d44b8e-50b6-4446-a75b-ca68e79ff57f","Type":"ContainerDied","Data":"9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172"} Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.894259 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d8255fbbc8921fee6dd6a4844a76364f4032c5800dd2e1becc6405460f84172" Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.928568 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:32 crc kubenswrapper[4995]: I0126 23:32:32.935458 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010750 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010797 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: 
\"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010894 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010916 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.010999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") pod \"55417497-6ca7-42c8-ba53-58da68837328\" (UID: \"55417497-6ca7-42c8-ba53-58da68837328\") " Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.011659 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs" (OuterVolumeSpecName: "logs") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.015328 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w" (OuterVolumeSpecName: "kube-api-access-89r2w") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "kube-api-access-89r2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.036737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.057968 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.060875 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data" (OuterVolumeSpecName: "config-data") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.104496 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "55417497-6ca7-42c8-ba53-58da68837328" (UID: "55417497-6ca7-42c8-ba53-58da68837328"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112618 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112663 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55417497-6ca7-42c8-ba53-58da68837328-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112686 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89r2w\" (UniqueName: \"kubernetes.io/projected/55417497-6ca7-42c8-ba53-58da68837328-kube-api-access-89r2w\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112704 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112741 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.112759 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/55417497-6ca7-42c8-ba53-58da68837328-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.823957 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.832040 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-8jh4s"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.848689 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.857586 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.864229 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcherfed2-account-delete-vsqtt"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.870663 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-fed2-account-create-update-xlm64"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903031 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"55417497-6ca7-42c8-ba53-58da68837328","Type":"ContainerDied","Data":"a04a3b402a9f38991f1c4fe01f247f03e365e4befb05601ae3567ff49fda3abf"}
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903087 4995 scope.go:117] "RemoveContainer" containerID="b94c38476d85b3e5a8a80f53da66673c7f6707238ddfd010b9ae0d0e0e0f1986"
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.903049 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0"
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.905640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"}
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.943450 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:32:33 crc kubenswrapper[4995]: I0126 23:32:33.952807 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"]
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.541977 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" path="/var/lib/kubelet/pods/23d44b8e-50b6-4446-a75b-ca68e79ff57f/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.542938 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55417497-6ca7-42c8-ba53-58da68837328" path="/var/lib/kubelet/pods/55417497-6ca7-42c8-ba53-58da68837328/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.543563 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" path="/var/lib/kubelet/pods/b24eb3bf-4d35-4163-962f-f3ad03f82019/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.544576 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db49764a-224f-47ef-ad9b-016ac609fc81" path="/var/lib/kubelet/pods/db49764a-224f-47ef-ad9b-016ac609fc81/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.545034 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5c0fe20-e8a0-4e46-889c-f7484847605c" path="/var/lib/kubelet/pods/e5c0fe20-e8a0-4e46-889c-f7484847605c/volumes"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.914566 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerStarted","Data":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"}
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915340 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent" containerID="cri-o://31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915425 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd" containerID="cri-o://83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915445 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent" containerID="cri-o://f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915383 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.915425 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core" containerID="cri-o://70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48" gracePeriod=30
Jan 26 23:32:34 crc kubenswrapper[4995]: I0126 23:32:34.984266 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.749202592 podStartE2EDuration="4.984249978s" podCreationTimestamp="2026-01-26 23:32:30 +0000 UTC" firstStartedPulling="2026-01-26 23:32:31.003178854 +0000 UTC m=+1455.167886319" lastFinishedPulling="2026-01-26 23:32:34.23822625 +0000 UTC m=+1458.402933705" observedRunningTime="2026-01-26 23:32:34.956917224 +0000 UTC m=+1459.121624689" watchObservedRunningTime="2026-01-26 23:32:34.984249978 +0000 UTC m=+1459.148957443"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.047940 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048241 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048257 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048276 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048283 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048296 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048302 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048321 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048326 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: E0126 23:32:35.048337 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048343 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048472 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-api"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048482 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="55417497-6ca7-42c8-ba53-58da68837328" containerName="watcher-decision-engine"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048496 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e713cb4-c623-4dbb-9d63-3d4cc65ecb6c" containerName="watcher-kuttl-api-log"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048506 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24eb3bf-4d35-4163-962f-f3ad03f82019" containerName="watcher-applier"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.048516 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="23d44b8e-50b6-4446-a75b-ca68e79ff57f" containerName="mariadb-account-delete"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.049000 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.073185 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.133771 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.134919 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.144271 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.151356 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.151412 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.170317 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.252929 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.252980 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253020 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253047 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.253733 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.278156 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"watcher-db-create-22m6m\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.354738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.354785 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.355622 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.372674 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.384842 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"watcher-8707-account-create-update-mgxtq\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.447832 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.852519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"]
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928542 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48" exitCode=2
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928574 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa" exitCode=0
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928586 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3" exitCode=0
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928617 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928664 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.928681 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"}
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.929708 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerStarted","Data":"38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa"}
Jan 26 23:32:35 crc kubenswrapper[4995]: W0126 23:32:35.989920 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3461eb3_3b0d_489f_875c_bab8e4f00694.slice/crio-a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8 WatchSource:0}: Error finding container a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8: Status 404 returned error can't find the container with id a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8
Jan 26 23:32:35 crc kubenswrapper[4995]: I0126 23:32:35.991493 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"]
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.720711 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.778878 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.778963 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779085 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779219 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779244 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779355 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779375 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") pod \"632dc482-0650-4bfc-a47c-5a573888ab9a\" (UID: \"632dc482-0650-4bfc-a47c-5a573888ab9a\") "
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779552 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779745 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.779995 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.780010 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/632dc482-0650-4bfc-a47c-5a573888ab9a-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.786245 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc" (OuterVolumeSpecName: "kube-api-access-dqdzc") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "kube-api-access-dqdzc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.786335 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts" (OuterVolumeSpecName: "scripts") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.814552 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.824898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.855246 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.871274 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data" (OuterVolumeSpecName: "config-data") pod "632dc482-0650-4bfc-a47c-5a573888ab9a" (UID: "632dc482-0650-4bfc-a47c-5a573888ab9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881641 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881672 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqdzc\" (UniqueName: \"kubernetes.io/projected/632dc482-0650-4bfc-a47c-5a573888ab9a-kube-api-access-dqdzc\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881684 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881692 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881701 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.881710 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632dc482-0650-4bfc-a47c-5a573888ab9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.942811 4995 generic.go:334] "Generic (PLEG): container finished" podID="a73a610c-0780-46cb-9f01-09b48049748d" containerID="b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.942876 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerDied","Data":"b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947536 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerID="4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerDied","Data":"4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.947654 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerStarted","Data":"a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950343 4995 generic.go:334] "Generic (PLEG): container finished" podID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c" exitCode=0
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950367 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950402 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"632dc482-0650-4bfc-a47c-5a573888ab9a","Type":"ContainerDied","Data":"7fcc567c9cccf85872cb8fdb86044065f9106b92aefb3dedd3fdd40c8d6b7df7"}
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950433 4995 scope.go:117] "RemoveContainer" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.950653 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.971356 4995 scope.go:117] "RemoveContainer" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:36 crc kubenswrapper[4995]: I0126 23:32:36.993907 4995 scope.go:117] "RemoveContainer" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.000405 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.008701 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021304 4995 scope.go:117] "RemoveContainer" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021439 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021752 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021768 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021784 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021790 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021803 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021810 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.021821 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021827 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.021981 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-notification-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022001 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="ceilometer-central-agent"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022010 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="proxy-httpd"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.022018 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" containerName="sg-core"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.023337 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.024994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.028732 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.028994 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.036231 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"]
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.059952 4995 scope.go:117] "RemoveContainer" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.060717 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": container with ID starting with 83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c not found: ID does not exist" containerID="83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.060748 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c"} err="failed to get container status \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": rpc error: code = NotFound desc = could not find container \"83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c\": container with ID starting with 83fb249bdb29bc67b6492eb0c742ed63d9f95733dc5f63363fc580349272883c not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.060769 4995 scope.go:117] "RemoveContainer" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.062367 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": container with ID starting with 70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48 not found: ID does not exist" containerID="70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.062419 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48"} err="failed to get container status \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": rpc error: code = NotFound desc = could not find container \"70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48\": container with ID starting with 70a51640035145a0ecc6578c205b96b71a83b34b61e7e3b8ff84ebe293f3ee48 not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.062474 4995 scope.go:117] "RemoveContainer" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.063210 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": container with ID starting with f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa not found: ID does not exist" containerID="f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.063362 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa"} err="failed to get container status \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": rpc error: code = NotFound desc = could not find container \"f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa\": container with ID starting with f78e7397cf964dae31fda46c31d94b05cec2fefc3e13e9db7f9f001d5f035faa not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.063462 4995 scope.go:117] "RemoveContainer" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: E0126 23:32:37.064492 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": container with ID starting with 31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3 not found: ID does not exist" containerID="31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.064628 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3"} err="failed to get container status \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": rpc error: code = NotFound desc = could not find container \"31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3\": container with ID starting with 31d576344931c668e184a3e003c41f9e8ed483d4d2b7cad07366a52d17aaffd3 not found: ID does not exist"
Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.083981 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\"
(UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084024 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084092 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084128 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084177 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084218 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwthw\" (UniqueName: 
\"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084235 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.084254 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185446 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185567 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185655 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185688 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185716 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185789 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.185838 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.186692 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.186880 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.190798 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.190866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.195632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.195924 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"ceilometer-0\" (UID: 
\"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.196493 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.199553 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"ceilometer-0\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.338356 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.769965 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:32:37 crc kubenswrapper[4995]: I0126 23:32:37.958276 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"de4709385c905c889d0404b4681905a6e961420de6f40ec0154a0b2ff42a1386"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.448685 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.453289 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508145 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") pod \"f3461eb3-3b0d-489f-875c-bab8e4f00694\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508229 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") pod \"a73a610c-0780-46cb-9f01-09b48049748d\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508265 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") pod \"a73a610c-0780-46cb-9f01-09b48049748d\" (UID: \"a73a610c-0780-46cb-9f01-09b48049748d\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508468 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") pod \"f3461eb3-3b0d-489f-875c-bab8e4f00694\" (UID: \"f3461eb3-3b0d-489f-875c-bab8e4f00694\") " Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.508970 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f3461eb3-3b0d-489f-875c-bab8e4f00694" (UID: "f3461eb3-3b0d-489f-875c-bab8e4f00694"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.509740 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a73a610c-0780-46cb-9f01-09b48049748d" (UID: "a73a610c-0780-46cb-9f01-09b48049748d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.513695 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h" (OuterVolumeSpecName: "kube-api-access-vzs4h") pod "a73a610c-0780-46cb-9f01-09b48049748d" (UID: "a73a610c-0780-46cb-9f01-09b48049748d"). InnerVolumeSpecName "kube-api-access-vzs4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.519386 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c" (OuterVolumeSpecName: "kube-api-access-2mp4c") pod "f3461eb3-3b0d-489f-875c-bab8e4f00694" (UID: "f3461eb3-3b0d-489f-875c-bab8e4f00694"). InnerVolumeSpecName "kube-api-access-2mp4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.529902 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632dc482-0650-4bfc-a47c-5a573888ab9a" path="/var/lib/kubelet/pods/632dc482-0650-4bfc-a47c-5a573888ab9a/volumes" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610211 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp4c\" (UniqueName: \"kubernetes.io/projected/f3461eb3-3b0d-489f-875c-bab8e4f00694-kube-api-access-2mp4c\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610251 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f3461eb3-3b0d-489f-875c-bab8e4f00694-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610267 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzs4h\" (UniqueName: \"kubernetes.io/projected/a73a610c-0780-46cb-9f01-09b48049748d-kube-api-access-vzs4h\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.610281 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a73a610c-0780-46cb-9f01-09b48049748d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.971853 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-22m6m" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.971936 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-22m6m" event={"ID":"a73a610c-0780-46cb-9f01-09b48049748d","Type":"ContainerDied","Data":"38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.972900 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38f4dd4710cc6d70fb81ffa7e151b462478364abb1af0aabfd4fab35dfd092aa" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974091 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" event={"ID":"f3461eb3-3b0d-489f-875c-bab8e4f00694","Type":"ContainerDied","Data":"a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8"} Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974139 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1f1ed5e9b068ea888a42a72127ee0a7ad65e471cd55e125e4120fc7400644d8" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.974197 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-8707-account-create-update-mgxtq" Jan 26 23:32:38 crc kubenswrapper[4995]: I0126 23:32:38.978867 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"} Jan 26 23:32:39 crc kubenswrapper[4995]: I0126 23:32:39.993571 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612552 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:40 crc kubenswrapper[4995]: E0126 23:32:40.612950 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612973 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: E0126 23:32:40.612988 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.612997 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.613193 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" containerName="mariadb-account-create-update" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 
23:32:40.613214 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a73a610c-0780-46cb-9f01-09b48049748d" containerName="mariadb-database-create" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.613714 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.618341 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-bhj8k" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.618658 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.625409 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743639 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743719 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.743738 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845491 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845545 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845614 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.845636 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850136 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850647 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.850957 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.866557 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"watcher-kuttl-db-sync-gk5qf\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893530 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893751 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.893892 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.894579 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.894713 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570" gracePeriod=600 Jan 26 23:32:40 crc kubenswrapper[4995]: I0126 23:32:40.929256 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:41 crc kubenswrapper[4995]: I0126 23:32:41.026077 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"} Jan 26 23:32:41 crc kubenswrapper[4995]: I0126 23:32:41.496188 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:32:41 crc kubenswrapper[4995]: W0126 23:32:41.518046 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a50a8e0_765f_4f78_8204_78064fe55510.slice/crio-67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5 WatchSource:0}: Error finding container 67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5: Status 404 returned error can't find the container with id 67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5 Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036530 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570" exitCode=0 Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036594 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036828 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" 
event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.036849 4995 scope.go:117] "RemoveContainer" containerID="45bd20296ff6d5aa0cde32c140dff26a4c42cad2ac9cddbd09b95d31149b3d69" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.038563 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerStarted","Data":"fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.038604 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerStarted","Data":"67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.040634 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerStarted","Data":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"} Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.040822 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.080185 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" podStartSLOduration=2.08017094 podStartE2EDuration="2.08017094s" podCreationTimestamp="2026-01-26 23:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:42.079728089 +0000 UTC m=+1466.244435554" watchObservedRunningTime="2026-01-26 
23:32:42.08017094 +0000 UTC m=+1466.244878405" Jan 26 23:32:42 crc kubenswrapper[4995]: I0126 23:32:42.104085 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.629051124 podStartE2EDuration="5.104072398s" podCreationTimestamp="2026-01-26 23:32:37 +0000 UTC" firstStartedPulling="2026-01-26 23:32:37.774144259 +0000 UTC m=+1461.938851724" lastFinishedPulling="2026-01-26 23:32:41.249165533 +0000 UTC m=+1465.413872998" observedRunningTime="2026-01-26 23:32:42.097417622 +0000 UTC m=+1466.262125077" watchObservedRunningTime="2026-01-26 23:32:42.104072398 +0000 UTC m=+1466.268779853" Jan 26 23:32:44 crc kubenswrapper[4995]: I0126 23:32:44.069702 4995 generic.go:334] "Generic (PLEG): container finished" podID="1a50a8e0-765f-4f78-8204-78064fe55510" containerID="fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8" exitCode=0 Jan 26 23:32:44 crc kubenswrapper[4995]: I0126 23:32:44.069800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerDied","Data":"fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8"} Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.575069 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622704 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622829 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622898 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.622934 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") pod \"1a50a8e0-765f-4f78-8204-78064fe55510\" (UID: \"1a50a8e0-765f-4f78-8204-78064fe55510\") " Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.628321 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.628688 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw" (OuterVolumeSpecName: "kube-api-access-r8vmw") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "kube-api-access-r8vmw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.650421 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.667170 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data" (OuterVolumeSpecName: "config-data") pod "1a50a8e0-765f-4f78-8204-78064fe55510" (UID: "1a50a8e0-765f-4f78-8204-78064fe55510"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.724971 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725196 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725264 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1a50a8e0-765f-4f78-8204-78064fe55510-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:45 crc kubenswrapper[4995]: I0126 23:32:45.725320 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vmw\" (UniqueName: \"kubernetes.io/projected/1a50a8e0-765f-4f78-8204-78064fe55510-kube-api-access-r8vmw\") on node \"crc\" DevicePath \"\"" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" event={"ID":"1a50a8e0-765f-4f78-8204-78064fe55510","Type":"ContainerDied","Data":"67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5"} Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123965 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67953a5423b03f2d52808ecb0921392ec6b46b5cedc8a830bc22ced69d61f8a5" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.123810 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.859039 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: E0126 23:32:46.860245 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.860342 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.860594 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" containerName="watcher-kuttl-db-sync" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.861347 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.863224 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-bhj8k" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.870689 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.875475 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.920549 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.921966 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.924036 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.931828 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.932878 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.934397 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945630 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945666 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945710 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") 
" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945728 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.945963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:46 crc kubenswrapper[4995]: I0126 23:32:46.950446 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.000584 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047921 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047957 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.047979 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048011 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048042 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048087 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " 
pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048221 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048243 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048348 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048425 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: 
\"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048481 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048526 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048544 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048670 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048711 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9qf\" (UniqueName: 
\"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.048813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.052587 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.052618 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.053052 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.071654 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k2d2\" (UniqueName: 
\"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"watcher-kuttl-applier-0\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150526 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150801 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150823 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150848 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150880 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150896 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.150938 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151357 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151434 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.151917 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154906 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.154998 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.155161 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.155228 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.156275 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: 
\"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.156632 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.168409 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.170321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.171443 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"watcher-kuttl-api-0\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.180567 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.237248 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.249190 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.634692 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.639834 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb3c5f3_cb09_4f84_bcf6_79b0bebf2602.slice/crio-4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656 WatchSource:0}: Error finding container 4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656: Status 404 returned error can't find the container with id 4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656 Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.738675 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.755458 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb459a34f_abd7_4350_8b91_c57b5124cbcf.slice/crio-86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120 WatchSource:0}: Error finding container 86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120: Status 404 returned error can't find the container with id 86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120 Jan 26 23:32:47 crc kubenswrapper[4995]: W0126 23:32:47.857660 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32336662_bff8_4aca_afa4_2039d421a770.slice/crio-9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6 WatchSource:0}: Error finding container 9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6: Status 404 returned error can't find the container with id 9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6 Jan 26 23:32:47 crc kubenswrapper[4995]: I0126 23:32:47.868178 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.140584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerStarted","Data":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.140628 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerStarted","Data":"4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.142794 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerStarted","Data":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.142849 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerStarted","Data":"9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144554 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.144599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerStarted","Data":"86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120"} Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.145485 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.146519 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": dial tcp 10.217.0.180:9322: connect: connection refused" Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.157346 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.157327684 podStartE2EDuration="2.157327684s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.1551827 +0000 UTC m=+1472.319890175" watchObservedRunningTime="2026-01-26 23:32:48.157327684 +0000 UTC m=+1472.322035149" Jan 26 23:32:48 crc 
kubenswrapper[4995]: I0126 23:32:48.175166 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.17514877 podStartE2EDuration="2.17514877s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.173179771 +0000 UTC m=+1472.337887236" watchObservedRunningTime="2026-01-26 23:32:48.17514877 +0000 UTC m=+1472.339856235" Jan 26 23:32:48 crc kubenswrapper[4995]: I0126 23:32:48.194361 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.19433654 podStartE2EDuration="2.19433654s" podCreationTimestamp="2026-01-26 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:32:48.188390642 +0000 UTC m=+1472.353098107" watchObservedRunningTime="2026-01-26 23:32:48.19433654 +0000 UTC m=+1472.359044015" Jan 26 23:32:51 crc kubenswrapper[4995]: I0126 23:32:51.230564 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:52 crc kubenswrapper[4995]: I0126 23:32:52.180846 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:52 crc kubenswrapper[4995]: I0126 23:32:52.239199 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.181142 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.212624 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.238275 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.243338 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.249380 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.288835 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:32:57 crc kubenswrapper[4995]: I0126 23:32:57.314370 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.235688 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.317538 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:32:58 crc kubenswrapper[4995]: I0126 23:32:58.392978 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.395913 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.396713 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" 
containerName="ceilometer-central-agent" containerID="cri-o://6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" gracePeriod=30 Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397652 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" containerID="cri-o://b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" gracePeriod=30 Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397748 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core" containerID="cri-o://fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" gracePeriod=30 Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.397814 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent" containerID="cri-o://25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" gracePeriod=30 Jan 26 23:33:00 crc kubenswrapper[4995]: I0126 23:33:00.409882 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.177:3000/\": EOF" Jan 26 23:33:00 crc kubenswrapper[4995]: E0126 23:33:00.529049 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d5b5d8b_4be0_469b_950f_0dbee7966330.slice/crio-conmon-fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039.scope\": RecentStats: unable to find data in memory cache]" Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 
23:33:01.267676 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" exitCode=0 Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268302 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" exitCode=2 Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268376 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" exitCode=0 Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.267746 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"} Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"} Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.268610 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"} Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.831982 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.840449 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gk5qf"] Jan 
26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.874206 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"] Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.875383 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.889854 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.890585 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" containerID="cri-o://c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" gracePeriod=30 Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.896855 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"] Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.940794 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.940958 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:01 crc 
kubenswrapper[4995]: I0126 23:33:01.957724 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:01 crc kubenswrapper[4995]: I0126 23:33:01.957998 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine" containerID="cri-o://d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" gracePeriod=30 Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000553 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000758 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" containerID="cri-o://5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" gracePeriod=30 Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.000925 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" containerID="cri-o://d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" gracePeriod=30 Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.042801 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.042885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.043797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.077890 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"watcher8707-account-delete-8wgxs\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.191552 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.192677 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.193270 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.196274 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:02 crc kubenswrapper[4995]: E0126 23:33:02.196361 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.279761 4995 generic.go:334] "Generic (PLEG): container finished" podID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" exitCode=143 Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.279805 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"} Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.520717 4995 prober.go:107] "Probe failed" probeType="Readiness" 
pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:56578->10.217.0.180:9322: read: connection reset by peer" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.520723 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.180:9322/\": read tcp 10.217.0.2:56594->10.217.0.180:9322: read: connection reset by peer" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.525247 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a50a8e0-765f-4f78-8204-78064fe55510" path="/var/lib/kubelet/pods/1a50a8e0-765f-4f78-8204-78064fe55510/volumes" Jan 26 23:33:02 crc kubenswrapper[4995]: I0126 23:33:02.695702 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"] Jan 26 23:33:02 crc kubenswrapper[4995]: W0126 23:33:02.715028 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eead2da_27a3_4ce5_9098_ac9564a6b27a.slice/crio-6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057 WatchSource:0}: Error finding container 6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057: Status 404 returned error can't find the container with id 6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057 Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.001528 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060833 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060933 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060956 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.060982 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.061027 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.061111 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") pod \"b459a34f-abd7-4350-8b91-c57b5124cbcf\" (UID: \"b459a34f-abd7-4350-8b91-c57b5124cbcf\") " Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.065392 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs" (OuterVolumeSpecName: "logs") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.083645 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82" (OuterVolumeSpecName: "kube-api-access-7xp82") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "kube-api-access-7xp82". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.100820 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.102792 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.136229 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data" (OuterVolumeSpecName: "config-data") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163660 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b459a34f-abd7-4350-8b91-c57b5124cbcf-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163695 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163708 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xp82\" (UniqueName: \"kubernetes.io/projected/b459a34f-abd7-4350-8b91-c57b5124cbcf-kube-api-access-7xp82\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163716 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.163983 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.224237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "b459a34f-abd7-4350-8b91-c57b5124cbcf" (UID: "b459a34f-abd7-4350-8b91-c57b5124cbcf"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.265746 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/b459a34f-abd7-4350-8b91-c57b5124cbcf-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288547 4995 generic.go:334] "Generic (PLEG): container finished" podID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" exitCode=0 Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288603 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.288640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"} Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.289349 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"b459a34f-abd7-4350-8b91-c57b5124cbcf","Type":"ContainerDied","Data":"86bddedc9072a6ed3ed3e4d2162a5c0bb2352a12c7cc9f1e8973aedd59c14120"} Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.289411 4995 scope.go:117] "RemoveContainer" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290215 4995 generic.go:334] "Generic (PLEG): container finished" 
podID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerID="7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc" exitCode=0 Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290250 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerDied","Data":"7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc"} Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.290274 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerStarted","Data":"6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057"} Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.335580 4995 scope.go:117] "RemoveContainer" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.342899 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.354560 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.363537 4995 scope.go:117] "RemoveContainer" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" Jan 26 23:33:03 crc kubenswrapper[4995]: E0126 23:33:03.364770 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": container with ID starting with d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c not found: ID does not exist" containerID="d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 
23:33:03.364802 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c"} err="failed to get container status \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": rpc error: code = NotFound desc = could not find container \"d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c\": container with ID starting with d555818036db6e752618431f3a6d8a24dd0c0c5684b99195eab7d9aa428d422c not found: ID does not exist" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.364822 4995 scope.go:117] "RemoveContainer" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" Jan 26 23:33:03 crc kubenswrapper[4995]: E0126 23:33:03.365189 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": container with ID starting with 5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a not found: ID does not exist" containerID="5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a" Jan 26 23:33:03 crc kubenswrapper[4995]: I0126 23:33:03.365248 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a"} err="failed to get container status \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": rpc error: code = NotFound desc = could not find container \"5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a\": container with ID starting with 5151440dd67eae1c3b74d7f864d13d2967cb2c326a3d7a9097f75f76d0433a1a not found: ID does not exist" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.138546 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.279990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280082 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280203 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280224 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") pod \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\" (UID: \"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.280593 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs" (OuterVolumeSpecName: "logs") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.286289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2" (OuterVolumeSpecName: "kube-api-access-8k2d2") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "kube-api-access-8k2d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301545 4995 generic.go:334] "Generic (PLEG): container finished" podID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" exitCode=0 Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301616 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerDied","Data":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"} Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301637 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301643 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602","Type":"ContainerDied","Data":"4ab072bfb95a7246f80622b096ad1314fc5881d224dbd69c2e091a13f6d01656"} Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.301654 4995 scope.go:117] "RemoveContainer" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.335316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.354209 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data" (OuterVolumeSpecName: "config-data") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.357190 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" (UID: "dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382058 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382095 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382118 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k2d2\" (UniqueName: \"kubernetes.io/projected/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-kube-api-access-8k2d2\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382130 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.382140 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.409076 4995 scope.go:117] "RemoveContainer" containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" Jan 26 23:33:04 crc kubenswrapper[4995]: E0126 23:33:04.409554 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": container with ID starting with c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565 not found: ID does not exist" 
containerID="c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.409599 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565"} err="failed to get container status \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": rpc error: code = NotFound desc = could not find container \"c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565\": container with ID starting with c58ef55800da32f644175f50139e2be57b0017eea0d5f68c7113c074b91b0565 not found: ID does not exist" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.525829 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" path="/var/lib/kubelet/pods/b459a34f-abd7-4350-8b91-c57b5124cbcf/volumes" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.628059 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.630094 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.642157 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.686594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") pod \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.686747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") pod \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\" (UID: \"0eead2da-27a3-4ce5-9098-ac9564a6b27a\") " Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.687573 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eead2da-27a3-4ce5-9098-ac9564a6b27a" (UID: "0eead2da-27a3-4ce5-9098-ac9564a6b27a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.691044 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs" (OuterVolumeSpecName: "kube-api-access-88rvs") pod "0eead2da-27a3-4ce5-9098-ac9564a6b27a" (UID: "0eead2da-27a3-4ce5-9098-ac9564a6b27a"). InnerVolumeSpecName "kube-api-access-88rvs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.788688 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88rvs\" (UniqueName: \"kubernetes.io/projected/0eead2da-27a3-4ce5-9098-ac9564a6b27a-kube-api-access-88rvs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:04 crc kubenswrapper[4995]: I0126 23:33:04.788718 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eead2da-27a3-4ce5-9098-ac9564a6b27a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.314424 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.314429 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher8707-account-delete-8wgxs" event={"ID":"0eead2da-27a3-4ce5-9098-ac9564a6b27a","Type":"ContainerDied","Data":"6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057"} Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.315171 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6937b7c319f771f017a0cb726fcee564c30aaae8c5871dcc04823178dd941057" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.789309 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909162 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909477 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909530 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909574 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909598 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.909616 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") pod \"32336662-bff8-4aca-afa4-2039d421a770\" (UID: \"32336662-bff8-4aca-afa4-2039d421a770\") " Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.910296 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs" (OuterVolumeSpecName: "logs") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.910549 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32336662-bff8-4aca-afa4-2039d421a770-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.916715 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf" (OuterVolumeSpecName: "kube-api-access-gh9qf") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "kube-api-access-gh9qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.933842 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.941259 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.946387 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data" (OuterVolumeSpecName: "config-data") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.979402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "32336662-bff8-4aca-afa4-2039d421a770" (UID: "32336662-bff8-4aca-afa4-2039d421a770"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:05 crc kubenswrapper[4995]: I0126 23:33:05.982672 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040235 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040265 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040276 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040284 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/32336662-bff8-4aca-afa4-2039d421a770-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.040293 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh9qf\" (UniqueName: \"kubernetes.io/projected/32336662-bff8-4aca-afa4-2039d421a770-kube-api-access-gh9qf\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141354 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141443 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141487 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141518 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141620 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141649 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141678 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.141705 
4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") pod \"0d5b5d8b-4be0-469b-950f-0dbee7966330\" (UID: \"0d5b5d8b-4be0-469b-950f-0dbee7966330\") " Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.142275 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.142395 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.146217 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw" (OuterVolumeSpecName: "kube-api-access-lwthw") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "kube-api-access-lwthw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.146698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts" (OuterVolumeSpecName: "scripts") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.165898 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.217344 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.228428 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243304 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243485 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwthw\" (UniqueName: \"kubernetes.io/projected/0d5b5d8b-4be0-469b-950f-0dbee7966330-kube-api-access-lwthw\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243541 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243598 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243647 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243740 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.243805 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0d5b5d8b-4be0-469b-950f-0dbee7966330-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.270087 4995 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data" (OuterVolumeSpecName: "config-data") pod "0d5b5d8b-4be0-469b-950f-0dbee7966330" (UID: "0d5b5d8b-4be0-469b-950f-0dbee7966330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325450 4995 generic.go:334] "Generic (PLEG): container finished" podID="32336662-bff8-4aca-afa4-2039d421a770" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" exitCode=0 Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325515 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerDied","Data":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325535 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325558 4995 scope.go:117] "RemoveContainer" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.325546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"32336662-bff8-4aca-afa4-2039d421a770","Type":"ContainerDied","Data":"9f1cd4619ee90776d56e36685fea9f144f4d5c6f3e290c4ee750414a618009a6"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332317 4995 generic.go:334] "Generic (PLEG): container finished" podID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" exitCode=0 Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332378 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332439 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0d5b5d8b-4be0-469b-950f-0dbee7966330","Type":"ContainerDied","Data":"de4709385c905c889d0404b4681905a6e961420de6f40ec0154a0b2ff42a1386"} Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.332533 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.345318 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d5b5d8b-4be0-469b-950f-0dbee7966330-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.358431 4995 scope.go:117] "RemoveContainer" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.359050 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": container with ID starting with d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540 not found: ID does not exist" containerID="d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.359091 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540"} err="failed to get container status \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": rpc error: code = NotFound desc = could not find container \"d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540\": container with ID starting with d843a2c3d0be41030deab1de87498c662c6ee9302ff8e994ec6e0f33da88e540 not found: ID does not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.359132 4995 scope.go:117] "RemoveContainer" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.390832 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.398180 4995 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.404167 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.408517 4995 scope.go:117] "RemoveContainer" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.411996 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.421579 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422571 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422652 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422736 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422791 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.422848 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.422899 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core" Jan 26 23:33:06 crc 
kubenswrapper[4995]: E0126 23:33:06.422970 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423023 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423090 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423166 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423227 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423282 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423342 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423397 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423500 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerName="mariadb-account-delete" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423560 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" 
containerName="mariadb-account-delete" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.423621 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423674 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423923 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-api" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.423997 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="sg-core" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424064 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="32336662-bff8-4aca-afa4-2039d421a770" containerName="watcher-decision-engine" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424151 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" containerName="watcher-applier" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424210 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-central-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424259 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="ceilometer-notification-agent" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424311 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" containerName="proxy-httpd" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424355 4995 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b459a34f-abd7-4350-8b91-c57b5124cbcf" containerName="watcher-kuttl-api-log" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.424401 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" containerName="mariadb-account-delete" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.426418 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.429472 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.430044 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.430908 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.440508 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.470210 4995 scope.go:117] "RemoveContainer" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.496400 4995 scope.go:117] "RemoveContainer" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.515941 4995 scope.go:117] "RemoveContainer" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.516414 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": container with ID starting 
with b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3 not found: ID does not exist" containerID="b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516444 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3"} err="failed to get container status \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": rpc error: code = NotFound desc = could not find container \"b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3\": container with ID starting with b3ffb8c55fd43a0d161dbbd88ea5a8e57c972f30ef0b50f5c19bfc41f45dd0f3 not found: ID does not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516466 4995 scope.go:117] "RemoveContainer" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.516763 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": container with ID starting with fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039 not found: ID does not exist" containerID="fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516812 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039"} err="failed to get container status \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": rpc error: code = NotFound desc = could not find container \"fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039\": container with ID starting with fc673c22f554a87a38abf704977a553e3d3ab83f6686c6181a7cf0a6f0ecc039 not found: ID does 
not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.516842 4995 scope.go:117] "RemoveContainer" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.518504 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": container with ID starting with 25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8 not found: ID does not exist" containerID="25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518528 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8"} err="failed to get container status \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": rpc error: code = NotFound desc = could not find container \"25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8\": container with ID starting with 25f4fb16cf7b939c887b9171dd5d52f74324d0ea0d9a763d0a507644dabfb1d8 not found: ID does not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518542 4995 scope.go:117] "RemoveContainer" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" Jan 26 23:33:06 crc kubenswrapper[4995]: E0126 23:33:06.518879 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": container with ID starting with 6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd not found: ID does not exist" containerID="6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.518908 4995 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd"} err="failed to get container status \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": rpc error: code = NotFound desc = could not find container \"6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd\": container with ID starting with 6ec753946f40bdabc721acbcecd165e60cdc3ee423fd440a8ec8c1a433d458dd not found: ID does not exist" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.527966 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d5b5d8b-4be0-469b-950f-0dbee7966330" path="/var/lib/kubelet/pods/0d5b5d8b-4be0-469b-950f-0dbee7966330/volumes" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.528850 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32336662-bff8-4aca-afa4-2039d421a770" path="/var/lib/kubelet/pods/32336662-bff8-4aca-afa4-2039d421a770/volumes" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.529516 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602" path="/var/lib/kubelet/pods/dcb3c5f3-cb09-4f84-bcf6-79b0bebf2602/volumes" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.548776 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.548829 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549028 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549111 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549223 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.549379 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc 
kubenswrapper[4995]: I0126 23:33:06.549423 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650776 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650845 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650881 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650935 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.650959 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651074 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651725 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.651998 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.652055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.652091 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656321 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656331 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656706 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.656805 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.657164 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.674355 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfszm\" 
(UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"ceilometer-0\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.749614 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.913655 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.926021 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-22m6m"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.953089 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.965198 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.969144 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher8707-account-delete-8wgxs"] Jan 26 23:33:06 crc kubenswrapper[4995]: I0126 23:33:06.973320 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-8707-account-create-update-mgxtq"] Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.142748 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:33:07 crc kubenswrapper[4995]: W0126 23:33:07.156807 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3ce857e_376e_4fd3_b74a_17165502ac6d.slice/crio-58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc WatchSource:0}: Error finding container 
58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc: Status 404 returned error can't find the container with id 58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.341684 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc"} Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.913565 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.915639 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:07 crc kubenswrapper[4995]: I0126 23:33:07.928050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.020316 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.021242 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.023153 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.029670 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.081682 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.081841 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183784 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: 
\"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183826 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.183874 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.184551 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.204029 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"watcher-db-create-65c6n\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.232992 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.287418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.287719 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.288996 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.311484 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"watcher-594d-account-create-update-54znd\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.340523 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.364308 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"} Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.542510 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eead2da-27a3-4ce5-9098-ac9564a6b27a" path="/var/lib/kubelet/pods/0eead2da-27a3-4ce5-9098-ac9564a6b27a/volumes" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.543173 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a73a610c-0780-46cb-9f01-09b48049748d" path="/var/lib/kubelet/pods/a73a610c-0780-46cb-9f01-09b48049748d/volumes" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.547403 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3461eb3-3b0d-489f-875c-bab8e4f00694" path="/var/lib/kubelet/pods/f3461eb3-3b0d-489f-875c-bab8e4f00694/volumes" Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.792864 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:08 crc kubenswrapper[4995]: W0126 23:33:08.795537 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0eb78169_d22d_4b1a_a51b_ad25391e10e9.slice/crio-00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd WatchSource:0}: Error finding container 00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd: Status 404 returned error can't find the container with id 00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd Jan 26 23:33:08 crc kubenswrapper[4995]: I0126 23:33:08.932448 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:08 crc kubenswrapper[4995]: W0126 23:33:08.934365 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35e92c48_e139_4a90_8601_1bd4d2937700.slice/crio-9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68 WatchSource:0}: Error finding container 9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68: Status 404 returned error can't find the container with id 9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68 Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372314 4995 generic.go:334] "Generic (PLEG): container finished" podID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerID="1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f" exitCode=0 Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372364 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerDied","Data":"1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f"} Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.372732 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerStarted","Data":"00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd"} Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374682 4995 generic.go:334] "Generic (PLEG): container finished" podID="35e92c48-e139-4a90-8601-1bd4d2937700" containerID="cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028" exitCode=0 Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374755 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" 
event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerDied","Data":"cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028"} Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.374780 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerStarted","Data":"9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68"} Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.377678 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"} Jan 26 23:33:09 crc kubenswrapper[4995]: I0126 23:33:09.377711 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"} Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.930279 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.940057 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976328 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") pod \"35e92c48-e139-4a90-8601-1bd4d2937700\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976614 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") pod \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976747 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") pod \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\" (UID: \"0eb78169-d22d-4b1a-a51b-ad25391e10e9\") " Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.976846 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") pod \"35e92c48-e139-4a90-8601-1bd4d2937700\" (UID: \"35e92c48-e139-4a90-8601-1bd4d2937700\") " Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.977741 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "35e92c48-e139-4a90-8601-1bd4d2937700" (UID: "35e92c48-e139-4a90-8601-1bd4d2937700"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981288 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0eb78169-d22d-4b1a-a51b-ad25391e10e9" (UID: "0eb78169-d22d-4b1a-a51b-ad25391e10e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981708 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l" (OuterVolumeSpecName: "kube-api-access-vbv9l") pod "35e92c48-e139-4a90-8601-1bd4d2937700" (UID: "35e92c48-e139-4a90-8601-1bd4d2937700"). InnerVolumeSpecName "kube-api-access-vbv9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:10 crc kubenswrapper[4995]: I0126 23:33:10.981769 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl" (OuterVolumeSpecName: "kube-api-access-76hxl") pod "0eb78169-d22d-4b1a-a51b-ad25391e10e9" (UID: "0eb78169-d22d-4b1a-a51b-ad25391e10e9"). InnerVolumeSpecName "kube-api-access-76hxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078539 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76hxl\" (UniqueName: \"kubernetes.io/projected/0eb78169-d22d-4b1a-a51b-ad25391e10e9-kube-api-access-76hxl\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078582 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0eb78169-d22d-4b1a-a51b-ad25391e10e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078592 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/35e92c48-e139-4a90-8601-1bd4d2937700-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.078604 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbv9l\" (UniqueName: \"kubernetes.io/projected/35e92c48-e139-4a90-8601-1bd4d2937700-kube-api-access-vbv9l\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" event={"ID":"35e92c48-e139-4a90-8601-1bd4d2937700","Type":"ContainerDied","Data":"9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394023 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-594d-account-create-update-54znd" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.394060 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dc1cc05dd16cab1ea05b901d60885348d2ac583025c19c754c5a6da05fffc68" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.397021 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerStarted","Data":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.397177 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398671 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-65c6n" event={"ID":"0eb78169-d22d-4b1a-a51b-ad25391e10e9","Type":"ContainerDied","Data":"00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd"} Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398718 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00a58f46f313c7a27615c7d285cd08ef75780713c74ce48aec7c28ae7f63a2dd" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.398792 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-65c6n" Jan 26 23:33:11 crc kubenswrapper[4995]: I0126 23:33:11.424267 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.114734999 podStartE2EDuration="5.424236579s" podCreationTimestamp="2026-01-26 23:33:06 +0000 UTC" firstStartedPulling="2026-01-26 23:33:07.158740275 +0000 UTC m=+1491.323447740" lastFinishedPulling="2026-01-26 23:33:10.468241855 +0000 UTC m=+1494.632949320" observedRunningTime="2026-01-26 23:33:11.417772368 +0000 UTC m=+1495.582479823" watchObservedRunningTime="2026-01-26 23:33:11.424236579 +0000 UTC m=+1495.588944044" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.238532 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:13 crc kubenswrapper[4995]: E0126 23:33:13.239066 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239077 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: E0126 23:33:13.239093 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239117 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239258 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" containerName="mariadb-account-create-update" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239268 4995 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" containerName="mariadb-database-create" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.239778 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.241512 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7pps7" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.241785 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.254866 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319239 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319307 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319450 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.319569 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421396 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421502 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421575 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.421640 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.426713 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.428576 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.436913 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.445306 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"watcher-kuttl-db-sync-gf67f\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:13 crc kubenswrapper[4995]: I0126 23:33:13.567507 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.057971 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.421501 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerStarted","Data":"99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438"} Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.421540 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerStarted","Data":"d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044"} Jan 26 23:33:14 crc kubenswrapper[4995]: I0126 23:33:14.444306 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" podStartSLOduration=1.444289564 podStartE2EDuration="1.444289564s" podCreationTimestamp="2026-01-26 23:33:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:14.440593421 +0000 UTC m=+1498.605300876" watchObservedRunningTime="2026-01-26 23:33:14.444289564 +0000 UTC m=+1498.608997029" Jan 26 23:33:16 crc kubenswrapper[4995]: I0126 23:33:16.442399 4995 generic.go:334] "Generic (PLEG): container finished" podID="64055a76-6d73-45e6-8c44-424f42362b20" containerID="99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438" exitCode=0 Jan 26 23:33:16 crc kubenswrapper[4995]: I0126 23:33:16.442659 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" 
event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerDied","Data":"99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438"} Jan 26 23:33:17 crc kubenswrapper[4995]: I0126 23:33:17.934597 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038516 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038611 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038663 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.038701 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") pod \"64055a76-6d73-45e6-8c44-424f42362b20\" (UID: \"64055a76-6d73-45e6-8c44-424f42362b20\") " Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.044218 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw" (OuterVolumeSpecName: "kube-api-access-zgblw") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "kube-api-access-zgblw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.044410 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.059549 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.094229 4995 scope.go:117] "RemoveContainer" containerID="314d9c39155357f797a09c4f9a573a846dd0baf7a5fe546731579ee9d200fd82" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.112202 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data" (OuterVolumeSpecName: "config-data") pod "64055a76-6d73-45e6-8c44-424f42362b20" (UID: "64055a76-6d73-45e6-8c44-424f42362b20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140841 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140899 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140920 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgblw\" (UniqueName: \"kubernetes.io/projected/64055a76-6d73-45e6-8c44-424f42362b20-kube-api-access-zgblw\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.140940 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/64055a76-6d73-45e6-8c44-424f42362b20-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.194614 4995 scope.go:117] "RemoveContainer" containerID="7b52cd788a34a33152655fad206082ca4ae4aa2dde98a41e59cc6dacf5cc9c02" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.461970 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" event={"ID":"64055a76-6d73-45e6-8c44-424f42362b20","Type":"ContainerDied","Data":"d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044"} Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.462007 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d747f3d56270407e9ad3ec6a1c9d987864ed2e6cea6f518a69edbdd0f6c50044" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.462033 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-gf67f" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745066 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: E0126 23:33:18.745785 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745807 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.745994 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="64055a76-6d73-45e6-8c44-424f42362b20" containerName="watcher-kuttl-db-sync" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.746987 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.748924 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749051 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749166 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749272 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749314 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.749436 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.751441 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-7pps7" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.753656 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.758554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.824667 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.825654 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.828807 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.836833 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850201 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850418 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850590 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850727 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850865 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.850964 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851040 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851149 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.851316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.855740 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.856169 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.856682 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.859404 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.862198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.882241 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.883894 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.885663 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"watcher-kuttl-api-0\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.887490 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.892643 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952083 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952191 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: 
\"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952284 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952305 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952441 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952519 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952538 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: 
\"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952568 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952705 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.952708 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.955738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") 
pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.956198 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.956304 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:18 crc kubenswrapper[4995]: I0126 23:33:18.967062 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"watcher-kuttl-applier-0\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053684 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053769 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053818 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053864 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.053885 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.054634 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.056736 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.057607 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.057890 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.058283 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.068356 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.074605 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.153748 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.249142 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.514301 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.524897 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: W0126 23:33:19.541435 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2153945e_4846_45d3_8e7c_dfaff880bbc8.slice/crio-381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74 WatchSource:0}: Error finding container 381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74: Status 404 returned error can't find the container with id 381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74 Jan 26 23:33:19 crc kubenswrapper[4995]: I0126 23:33:19.814769 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:19 crc kubenswrapper[4995]: W0126 23:33:19.820501 
4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93b2c055_90b0_4ee2_8155_9d7a63e5a8ac.slice/crio-8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a WatchSource:0}: Error finding container 8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a: Status 404 returned error can't find the container with id 8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482425 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482476 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.482488 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerStarted","Data":"4d3284c898b59faa984bdb5db96098bca7f16dd71bef193b7303ce861694df97"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.483889 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.486391 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerStarted","Data":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 
23:33:20.486424 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerStarted","Data":"8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.488457 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerStarted","Data":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.488502 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerStarted","Data":"381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74"} Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.529820 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.529801537 podStartE2EDuration="2.529801537s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.524817002 +0000 UTC m=+1504.689524467" watchObservedRunningTime="2026-01-26 23:33:20.529801537 +0000 UTC m=+1504.694509002" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.559304 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.559285745 podStartE2EDuration="2.559285745s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.548070345 +0000 UTC m=+1504.712777810" 
watchObservedRunningTime="2026-01-26 23:33:20.559285745 +0000 UTC m=+1504.723993210" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.577706 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.577688656 podStartE2EDuration="2.577688656s" podCreationTimestamp="2026-01-26 23:33:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:20.569424239 +0000 UTC m=+1504.734131694" watchObservedRunningTime="2026-01-26 23:33:20.577688656 +0000 UTC m=+1504.742396121" Jan 26 23:33:20 crc kubenswrapper[4995]: I0126 23:33:20.786626 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:21 crc kubenswrapper[4995]: I0126 23:33:21.978689 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:22 crc kubenswrapper[4995]: I0126 23:33:22.517605 4995 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 23:33:22 crc kubenswrapper[4995]: I0126 23:33:22.772644 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:23 crc kubenswrapper[4995]: I0126 23:33:23.208984 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.069179 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.154532 4995 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:24 crc kubenswrapper[4995]: I0126 23:33:24.407909 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:25 crc kubenswrapper[4995]: I0126 23:33:25.618425 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:26 crc kubenswrapper[4995]: I0126 23:33:26.801811 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:27 crc kubenswrapper[4995]: I0126 23:33:27.990880 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.068775 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.084780 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.154743 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.180112 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.211463 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.250333 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.286273 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.579018 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.583672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.607811 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:29 crc kubenswrapper[4995]: I0126 23:33:29.622332 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.399230 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_watcher-kuttl-decision-engine-0_93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/watcher-decision-engine/0.log" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.599870 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.611019 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-gf67f"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.639703 4995 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.641039 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.647860 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.708320 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.740193 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.745822 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.746352 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747517 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747602 4995 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:31.247582571 +0000 UTC m=+1515.412290096 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747886 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: E0126 23:33:30.747921 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:31.247912859 +0000 UTC m=+1515.412620324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.780168 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.847967 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.848023 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.849037 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"watcher594d-account-delete-csqxj\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.870828 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"watcher594d-account-delete-csqxj\" 
(UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:30 crc kubenswrapper[4995]: I0126 23:33:30.960402 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262296 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262572 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.262558524 +0000 UTC m=+1516.427265989 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262869 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.262898 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.262890453 +0000 UTC m=+1516.427597918 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.418455 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerStarted","Data":"694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91"} Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596193 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" containerID="cri-o://e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596414 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" containerID="cri-o://33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.596572 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" containerID="cri-o://1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" gracePeriod=30 Jan 26 23:33:31 crc kubenswrapper[4995]: I0126 23:33:31.597047 4995 kubelet_pods.go:1007] "Unable to retrieve 
pull secret, the image pull may not succeed." pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" secret="" err="secret \"watcher-watcher-kuttl-dockercfg-7pps7\" not found" Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.770853 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:31 crc kubenswrapper[4995]: E0126 23:33:31.770947 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:32.270924723 +0000 UTC m=+1516.435632218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.278938 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279469 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:33.279447795 +0000 UTC m=+1517.444155260 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279230 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279593 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:34.279568468 +0000 UTC m=+1518.444276003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279279 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-api-config-data: secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.279629 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data podName:5cca5bb3-8e8f-412e-a5a7-b0b072f72500 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:34.279621409 +0000 UTC m=+1518.444329004 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data") pod "watcher-kuttl-api-0" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500") : secret "watcher-kuttl-api-config-data" not found Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.510777 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.530788 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64055a76-6d73-45e6-8c44-424f42362b20" path="/var/lib/kubelet/pods/64055a76-6d73-45e6-8c44-424f42362b20/volumes" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603779 4995 generic.go:334] "Generic (PLEG): container finished" podID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" exitCode=0 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603816 4995 generic.go:334] "Generic (PLEG): container finished" podID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" exitCode=143 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603829 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603921 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603937 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"5cca5bb3-8e8f-412e-a5a7-b0b072f72500","Type":"ContainerDied","Data":"4d3284c898b59faa984bdb5db96098bca7f16dd71bef193b7303ce861694df97"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.603974 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607289 4995 generic.go:334] "Generic (PLEG): container finished" podID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerID="bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2" exitCode=0 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607341 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerDied","Data":"bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2"} Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.607438 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" containerID="cri-o://7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" gracePeriod=30 Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.633882 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.649984 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.650364 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650391 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} err="failed to get container status \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650411 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: E0126 23:33:32.650579 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650602 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} err="failed to get container status \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": rpc error: code = NotFound desc = could not find container \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650616 4995 scope.go:117] "RemoveContainer" containerID="1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650776 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598"} err="failed to get container status \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": rpc error: code = NotFound desc = could not find container \"1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598\": container with ID starting with 1df014028a1ef076e4b91c13050a9365e7488a78e2ef627fc430b68b7a5ba598 not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.650794 4995 scope.go:117] "RemoveContainer" containerID="e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.651004 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e"} err="failed to get container status \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": rpc error: code = NotFound desc = could not find container \"e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e\": container with ID starting with e2626a8669c26be3720c67e756d921cd0949060a70c100c4a2f299ab130d887e not found: ID does not exist" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685660 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685714 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685770 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685936 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.685985 4995 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.686031 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") pod \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\" (UID: \"5cca5bb3-8e8f-412e-a5a7-b0b072f72500\") " Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.686715 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs" (OuterVolumeSpecName: "logs") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.709194 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45" (OuterVolumeSpecName: "kube-api-access-ksq45") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "kube-api-access-ksq45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.713650 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.714004 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.741617 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data" (OuterVolumeSpecName: "config-data") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.767638 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5cca5bb3-8e8f-412e-a5a7-b0b072f72500" (UID: "5cca5bb3-8e8f-412e-a5a7-b0b072f72500"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787594 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787627 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787637 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787646 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787656 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.787666 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksq45\" (UniqueName: \"kubernetes.io/projected/5cca5bb3-8e8f-412e-a5a7-b0b072f72500-kube-api-access-ksq45\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.961752 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:32 crc kubenswrapper[4995]: I0126 23:33:32.970274 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:33 crc kubenswrapper[4995]: E0126 23:33:33.294878 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:33 crc kubenswrapper[4995]: E0126 23:33:33.295170 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:35.295153294 +0000 UTC m=+1519.459860759 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.020214 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.114520 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") pod \"18d06905-621f-4fcd-96a9-a3da780dbf9f\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.114619 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") pod \"18d06905-621f-4fcd-96a9-a3da780dbf9f\" (UID: \"18d06905-621f-4fcd-96a9-a3da780dbf9f\") " Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.115729 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18d06905-621f-4fcd-96a9-a3da780dbf9f" (UID: "18d06905-621f-4fcd-96a9-a3da780dbf9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.119402 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr" (OuterVolumeSpecName: "kube-api-access-g8fzr") pod "18d06905-621f-4fcd-96a9-a3da780dbf9f" (UID: "18d06905-621f-4fcd-96a9-a3da780dbf9f"). InnerVolumeSpecName "kube-api-access-g8fzr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.159966 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.161617 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.163765 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.163792 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.215961 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18d06905-621f-4fcd-96a9-a3da780dbf9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.216002 4995 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-g8fzr\" (UniqueName: \"kubernetes.io/projected/18d06905-621f-4fcd-96a9-a3da780dbf9f-kube-api-access-g8fzr\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.317928 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-applier-config-data: secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: E0126 23:33:34.318011 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data podName:2153945e-4846-45d3-8e7c-dfaff880bbc8 nodeName:}" failed. No retries permitted until 2026-01-26 23:33:38.317994034 +0000 UTC m=+1522.482701509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data") pod "watcher-kuttl-applier-0" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8") : secret "watcher-kuttl-applier-config-data" not found Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.532628 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" path="/var/lib/kubelet/pods/5cca5bb3-8e8f-412e-a5a7-b0b072f72500/volumes" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634520 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" event={"ID":"18d06905-621f-4fcd-96a9-a3da780dbf9f","Type":"ContainerDied","Data":"694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91"} Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634577 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694599d83d729f31d133d0ca0d751152908c5ca0a1daa099453eccc3981ddd91" Jan 26 23:33:34 crc kubenswrapper[4995]: I0126 23:33:34.634539 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher594d-account-delete-csqxj" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.338372 4995 secret.go:188] Couldn't get secret watcher-kuttl-default/watcher-kuttl-decision-engine-config-data: secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.338521 4995 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data podName:93b2c055-90b0-4ee2-8155-9d7a63e5a8ac nodeName:}" failed. No retries permitted until 2026-01-26 23:33:39.338486274 +0000 UTC m=+1523.503193779 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data") pod "watcher-kuttl-decision-engine-0" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac") : secret "watcher-kuttl-decision-engine-config-data" not found Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.697225 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.712249 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-65c6n"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.722542 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.732496 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher594d-account-delete-csqxj"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.741465 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.747814 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["watcher-kuttl-default/watcher-594d-account-create-update-54znd"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.853860 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854633 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854651 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854680 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854687 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: E0126 23:33:35.854698 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854704 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854863 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" containerName="mariadb-account-delete" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854889 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-api" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.854902 4995 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5cca5bb3-8e8f-412e-a5a7-b0b072f72500" containerName="watcher-kuttl-api-log" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.855462 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.914942 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950462 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950582 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.950641 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.951589 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.954942 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:33:35 crc kubenswrapper[4995]: I0126 23:33:35.967934 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052178 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052207 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052237 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod 
\"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.052954 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.077053 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"watcher-db-create-c64b2\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.153552 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.153622 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.154794 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.172694 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.176296 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"watcher-1555-account-create-update-j8dp6\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.280442 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.610901 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.611667 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb78169-d22d-4b1a-a51b-ad25391e10e9" path="/var/lib/kubelet/pods/0eb78169-d22d-4b1a-a51b-ad25391e10e9/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.613041 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d06905-621f-4fcd-96a9-a3da780dbf9f" path="/var/lib/kubelet/pods/18d06905-621f-4fcd-96a9-a3da780dbf9f/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.615679 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35e92c48-e139-4a90-8601-1bd4d2937700" path="/var/lib/kubelet/pods/35e92c48-e139-4a90-8601-1bd4d2937700/volumes" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.617412 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:36 crc kubenswrapper[4995]: E0126 23:33:36.617716 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.617731 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.618150 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerName="watcher-applier" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.619726 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.625730 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668384 4995 generic.go:334] "Generic (PLEG): container finished" podID="2153945e-4846-45d3-8e7c-dfaff880bbc8" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" exitCode=0 Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668431 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerDied","Data":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668450 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668471 4995 scope.go:117] "RemoveContainer" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.668459 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"2153945e-4846-45d3-8e7c-dfaff880bbc8","Type":"ContainerDied","Data":"381b689ccc249c5529258e69f3905511aa53d241bbfd4a548a025214c010ca74"} Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.708414 4995 scope.go:117] "RemoveContainer" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: E0126 23:33:36.709035 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": container with ID starting with 
33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02 not found: ID does not exist" containerID="33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.709062 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02"} err="failed to get container status \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": rpc error: code = NotFound desc = could not find container \"33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02\": container with ID starting with 33fb7554161ed791053c8d550eed8d1fbb45b4dccce7cb22997c21e70d7e4f02 not found: ID does not exist" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711145 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711250 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.711439 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " 
pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.776502 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812184 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812237 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812280 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812303 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812340 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") pod \"2153945e-4846-45d3-8e7c-dfaff880bbc8\" (UID: \"2153945e-4846-45d3-8e7c-dfaff880bbc8\") " 
Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812553 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812642 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812667 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.812825 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs" (OuterVolumeSpecName: "logs") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.813939 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.817076 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.819827 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f" (OuterVolumeSpecName: "kube-api-access-9tl8f") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "kube-api-access-9tl8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.854866 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"redhat-marketplace-n58tq\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.857706 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.869540 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914862 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2153945e-4846-45d3-8e7c-dfaff880bbc8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914906 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tl8f\" (UniqueName: \"kubernetes.io/projected/2153945e-4846-45d3-8e7c-dfaff880bbc8-kube-api-access-9tl8f\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.914920 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.916227 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data" (OuterVolumeSpecName: "config-data") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.923289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2153945e-4846-45d3-8e7c-dfaff880bbc8" (UID: "2153945e-4846-45d3-8e7c-dfaff880bbc8"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.956131 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:36 crc kubenswrapper[4995]: I0126 23:33:36.962032 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.016565 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.017020 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2153945e-4846-45d3-8e7c-dfaff880bbc8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.016622 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.031141 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.478344 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629074 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629172 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629220 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629274 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629312 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.629368 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") pod \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\" (UID: \"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac\") " Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.630761 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs" (OuterVolumeSpecName: "logs") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.640501 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.656137 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z" (OuterVolumeSpecName: "kube-api-access-ksj9z") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "kube-api-access-ksj9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.668273 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.695388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data" (OuterVolumeSpecName: "config-data") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701134 4995 generic.go:334] "Generic (PLEG): container finished" podID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerID="4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee" exitCode=0 Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701211 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerDied","Data":"4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.701247 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerStarted","Data":"f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.705005 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerStarted","Data":"145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.705060 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" 
event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerStarted","Data":"bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707154 4995 generic.go:334] "Generic (PLEG): container finished" podID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" exitCode=0 Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707201 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerDied","Data":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707232 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"93b2c055-90b0-4ee2-8155-9d7a63e5a8ac","Type":"ContainerDied","Data":"8654007c1ca8f98c665a231383230a614f26830fc3180c6562d94c6912d21a0a"} Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707255 4995 scope.go:117] "RemoveContainer" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.707368 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.720389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.732427 4995 scope.go:117] "RemoveContainer" containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733678 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733785 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksj9z\" (UniqueName: \"kubernetes.io/projected/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-kube-api-access-ksj9z\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733878 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.733957 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.734041 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:37 crc kubenswrapper[4995]: E0126 23:33:37.733843 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": container with ID starting with 7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565 not found: ID does not exist" 
containerID="7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.734236 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565"} err="failed to get container status \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": rpc error: code = NotFound desc = could not find container \"7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565\": container with ID starting with 7b43b8ae047e1361893020ea0b66dce6b5cb0e45ccfe3c69663046e926ae7565 not found: ID does not exist" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.752215 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" (UID: "93b2c055-90b0-4ee2-8155-9d7a63e5a8ac"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:37 crc kubenswrapper[4995]: I0126 23:33:37.835976 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.034329 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" podStartSLOduration=3.03431199 podStartE2EDuration="3.03431199s" podCreationTimestamp="2026-01-26 23:33:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:37.744178765 +0000 UTC m=+1521.908886230" watchObservedRunningTime="2026-01-26 23:33:38.03431199 +0000 UTC m=+1522.199019455" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.039405 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.049536 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.530205 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2153945e-4846-45d3-8e7c-dfaff880bbc8" path="/var/lib/kubelet/pods/2153945e-4846-45d3-8e7c-dfaff880bbc8/volumes" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.531066 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" path="/var/lib/kubelet/pods/93b2c055-90b0-4ee2-8155-9d7a63e5a8ac/volumes" Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.716297 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" 
containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" exitCode=0 Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.716853 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68"} Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.717780 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerStarted","Data":"ad6e8d03448bf620648f7ff3a9ff045c078111f4db8adc6377c026512acd50ae"} Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.722486 4995 generic.go:334] "Generic (PLEG): container finished" podID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerID="145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1" exitCode=0 Jan 26 23:33:38 crc kubenswrapper[4995]: I0126 23:33:38.722542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerDied","Data":"145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.159833 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262044 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") pod \"ca2f73d1-0380-4fcf-9fde-35f821426fed\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262210 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") pod \"ca2f73d1-0380-4fcf-9fde-35f821426fed\" (UID: \"ca2f73d1-0380-4fcf-9fde-35f821426fed\") " Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.262962 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca2f73d1-0380-4fcf-9fde-35f821426fed" (UID: "ca2f73d1-0380-4fcf-9fde-35f821426fed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.267756 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8" (OuterVolumeSpecName: "kube-api-access-jj9d8") pod "ca2f73d1-0380-4fcf-9fde-35f821426fed" (UID: "ca2f73d1-0380-4fcf-9fde-35f821426fed"). InnerVolumeSpecName "kube-api-access-jj9d8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.363783 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca2f73d1-0380-4fcf-9fde-35f821426fed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.364025 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj9d8\" (UniqueName: \"kubernetes.io/projected/ca2f73d1-0380-4fcf-9fde-35f821426fed-kube-api-access-jj9d8\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.732599 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" exitCode=0 Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.732852 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736308 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-c64b2" Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736808 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-c64b2" event={"ID":"ca2f73d1-0380-4fcf-9fde-35f821426fed","Type":"ContainerDied","Data":"f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06"} Jan 26 23:33:39 crc kubenswrapper[4995]: I0126 23:33:39.736943 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0828b9c5c98a0cabed1aba541dbb0ad7c039d4728621a1138fa826256600b06" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.082385 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175369 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") pod \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175522 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") pod \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\" (UID: \"599bdb97-9d21-44b9-9a59-84320b1c4a6e\") " Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175786 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "599bdb97-9d21-44b9-9a59-84320b1c4a6e" (UID: "599bdb97-9d21-44b9-9a59-84320b1c4a6e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.175882 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/599bdb97-9d21-44b9-9a59-84320b1c4a6e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.179760 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6" (OuterVolumeSpecName: "kube-api-access-5w9s6") pod "599bdb97-9d21-44b9-9a59-84320b1c4a6e" (UID: "599bdb97-9d21-44b9-9a59-84320b1c4a6e"). InnerVolumeSpecName "kube-api-access-5w9s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.277373 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w9s6\" (UniqueName: \"kubernetes.io/projected/599bdb97-9d21-44b9-9a59-84320b1c4a6e-kube-api-access-5w9s6\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.749964 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" event={"ID":"599bdb97-9d21-44b9-9a59-84320b1c4a6e","Type":"ContainerDied","Data":"bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362"} Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.750011 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcbdb30b1631e4cc7ba330a7b5bac0ee6f3b18ba19f15b08d579b5924d2c1362" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.750020 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-1555-account-create-update-j8dp6" Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.752940 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerStarted","Data":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} Jan 26 23:33:40 crc kubenswrapper[4995]: I0126 23:33:40.781048 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n58tq" podStartSLOduration=3.344474552 podStartE2EDuration="4.781022049s" podCreationTimestamp="2026-01-26 23:33:36 +0000 UTC" firstStartedPulling="2026-01-26 23:33:38.718625823 +0000 UTC m=+1522.883333298" lastFinishedPulling="2026-01-26 23:33:40.1551733 +0000 UTC m=+1524.319880795" observedRunningTime="2026-01-26 23:33:40.776426344 +0000 UTC m=+1524.941133819" watchObservedRunningTime="2026-01-26 23:33:40.781022049 +0000 UTC m=+1524.945729514" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.345063 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346124 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346140 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346161 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346169 4995 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: E0126 23:33:46.346186 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346195 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346374 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b2c055-90b0-4ee2-8155-9d7a63e5a8ac" containerName="watcher-decision-engine" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346387 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" containerName="mariadb-account-create-update" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346402 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" containerName="mariadb-database-create" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.346993 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.349128 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.352210 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.352927 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jcwzq" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478420 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478740 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478775 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.478821 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.580317 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.580872 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.582071 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.582172 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 
23:33:46.587817 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.588112 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.588678 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.604071 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"watcher-kuttl-db-sync-d4txg\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.669420 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.957357 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:46 crc kubenswrapper[4995]: I0126 23:33:46.957692 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.006238 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.204568 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.821319 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerStarted","Data":"2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3"} Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.823078 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerStarted","Data":"a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8"} Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.841336 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" podStartSLOduration=1.8413055790000001 podStartE2EDuration="1.841305579s" podCreationTimestamp="2026-01-26 23:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:47.839405841 +0000 UTC m=+1532.004113306" 
watchObservedRunningTime="2026-01-26 23:33:47.841305579 +0000 UTC m=+1532.006013044" Jan 26 23:33:47 crc kubenswrapper[4995]: I0126 23:33:47.869429 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:49 crc kubenswrapper[4995]: I0126 23:33:49.838437 4995 generic.go:334] "Generic (PLEG): container finished" podID="9b1297fe-4233-44a3-864c-2564bef1017f" containerID="2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3" exitCode=0 Jan 26 23:33:49 crc kubenswrapper[4995]: I0126 23:33:49.838513 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerDied","Data":"2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3"} Jan 26 23:33:50 crc kubenswrapper[4995]: I0126 23:33:50.536690 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:50 crc kubenswrapper[4995]: I0126 23:33:50.887986 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n58tq" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" containerID="cri-o://55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" gracePeriod=2 Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.358898 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367272 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367335 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367446 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.367486 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") pod \"9b1297fe-4233-44a3-864c-2564bef1017f\" (UID: \"9b1297fe-4233-44a3-864c-2564bef1017f\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.377623 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.377696 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc" (OuterVolumeSpecName: "kube-api-access-dg9wc") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "kube-api-access-dg9wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.406355 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.435448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data" (OuterVolumeSpecName: "config-data") pod "9b1297fe-4233-44a3-864c-2564bef1017f" (UID: "9b1297fe-4233-44a3-864c-2564bef1017f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.468975 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469005 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469020 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg9wc\" (UniqueName: \"kubernetes.io/projected/9b1297fe-4233-44a3-864c-2564bef1017f-kube-api-access-dg9wc\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.469031 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9b1297fe-4233-44a3-864c-2564bef1017f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.470718 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.569499 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.569972 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.570016 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") pod \"08dc9823-c5ed-451c-a202-312e9b4cd254\" (UID: \"08dc9823-c5ed-451c-a202-312e9b4cd254\") " Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.582238 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities" (OuterVolumeSpecName: "utilities") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.582969 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n" (OuterVolumeSpecName: "kube-api-access-njx4n") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "kube-api-access-njx4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.591940 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08dc9823-c5ed-451c-a202-312e9b4cd254" (UID: "08dc9823-c5ed-451c-a202-312e9b4cd254"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671344 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njx4n\" (UniqueName: \"kubernetes.io/projected/08dc9823-c5ed-451c-a202-312e9b4cd254-kube-api-access-njx4n\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671409 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.671423 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08dc9823-c5ed-451c-a202-312e9b4cd254-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899286 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" event={"ID":"9b1297fe-4233-44a3-864c-2564bef1017f","Type":"ContainerDied","Data":"a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899330 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9f51607f45c39b4bbc7c12c507945758f52823611d78a21f56d37cba1b237c8" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.899442 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-d4txg" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902726 4995 generic.go:334] "Generic (PLEG): container finished" podID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" exitCode=0 Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902752 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902769 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n58tq" event={"ID":"08dc9823-c5ed-451c-a202-312e9b4cd254","Type":"ContainerDied","Data":"ad6e8d03448bf620648f7ff3a9ff045c078111f4db8adc6377c026512acd50ae"} Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902787 4995 scope.go:117] "RemoveContainer" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.902854 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n58tq" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.932296 4995 scope.go:117] "RemoveContainer" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.969087 4995 scope.go:117] "RemoveContainer" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:51 crc kubenswrapper[4995]: I0126 23:33:51.976504 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.004606 4995 scope.go:117] "RemoveContainer" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005058 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": container with ID starting with 55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0 not found: ID does not exist" containerID="55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005144 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0"} err="failed to get container status \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": rpc error: code = NotFound desc = could not find container \"55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0\": container with ID starting with 55b1050a699d66f1c8870cdfe20f91796fc53dffa037cee1de3821ae857cf3f0 not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005176 4995 scope.go:117] "RemoveContainer" 
containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005493 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": container with ID starting with 629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf not found: ID does not exist" containerID="629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005527 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf"} err="failed to get container status \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": rpc error: code = NotFound desc = could not find container \"629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf\": container with ID starting with 629380fe7bebf4b3221e7174af40b1715eae45363feb1e7ff8a8f9246d2e2dcf not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005548 4995 scope.go:117] "RemoveContainer" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.005782 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": container with ID starting with 62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68 not found: ID does not exist" containerID="62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005802 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68"} err="failed to get container status \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": rpc error: code = NotFound desc = could not find container \"62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68\": container with ID starting with 62f340aa0aeb415df065f0f3a135011b2d9a6272afd0553ed168bc30c7211f68 not found: ID does not exist" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.005907 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n58tq"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.104908 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105262 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105274 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105299 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-content" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105305 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-content" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105314 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105320 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" 
containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: E0126 23:33:52.105336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-utilities" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105341 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="extract-utilities" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105467 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" containerName="registry-server" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.105481 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" containerName="watcher-kuttl-db-sync" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.106038 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.108138 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.112293 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-jcwzq" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.115594 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177725 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177774 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177813 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177851 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177869 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.177892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.211382 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.212921 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.214836 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.232049 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.234638 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.239965 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.257209 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.258463 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.261378 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.267853 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.275852 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280150 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280197 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280237 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280257 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280280 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280300 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280316 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280354 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280379 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280399 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280421 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280447 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280470 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280491 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280527 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280544 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280573 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280595 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 
23:33:52.280619 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280639 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280656 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280671 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.280697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc 
kubenswrapper[4995]: I0126 23:33:52.284693 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.286368 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.288603 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.288792 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.293216 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc 
kubenswrapper[4995]: I0126 23:33:52.302925 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383166 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383332 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383380 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 
23:33:52.383412 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383462 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383484 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383525 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383565 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383618 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383659 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383711 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383775 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383803 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383857 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383893 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.383915 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.384542 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.384610 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.385280 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " 
pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387364 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387467 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.387613 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.390699 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.390989 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: 
I0126 23:33:52.391124 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.391396 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.393258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394059 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394460 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.394829 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.402281 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"watcher-kuttl-api-1\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.405373 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"watcher-kuttl-api-0\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.409169 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"watcher-kuttl-applier-0\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.444543 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.530160 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08dc9823-c5ed-451c-a202-312e9b4cd254" path="/var/lib/kubelet/pods/08dc9823-c5ed-451c-a202-312e9b4cd254/volumes" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.531914 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.560930 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.578418 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.885498 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:33:52 crc kubenswrapper[4995]: W0126 23:33:52.889153 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4eea70_3af8_412b_8a7f_8abda2350f7a.slice/crio-cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b WatchSource:0}: Error finding container cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b: Status 404 returned error can't find the container with id cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b Jan 26 23:33:52 crc kubenswrapper[4995]: I0126 23:33:52.921613 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerStarted","Data":"cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.059780 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.113175 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.125687 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:33:53 crc kubenswrapper[4995]: W0126 23:33:53.160063 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd4c263_1050_4645_a224_8e1f758e4495.slice/crio-dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e WatchSource:0}: Error finding container dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e: Status 404 returned error can't find the container with id dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931036 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931366 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931401 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.931412 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerStarted","Data":"3a6cd3187438302097e41f8997601ad4ed78776668fbf86e5b5c905b7d06906d"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" 
event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932881 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.932894 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerStarted","Data":"2b794c5d957bd49aebae6d5afbe71ffae4c423dc061005fe477c13b7c05312fd"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.935673 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerStarted","Data":"7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.937420 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerStarted","Data":"d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.937446 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerStarted","Data":"dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e"} Jan 26 23:33:53 crc kubenswrapper[4995]: I0126 23:33:53.973652 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=1.973631935 
podStartE2EDuration="1.973631935s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:53.970439625 +0000 UTC m=+1538.135147090" watchObservedRunningTime="2026-01-26 23:33:53.973631935 +0000 UTC m=+1538.138339410" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.025905 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.025883053 podStartE2EDuration="2.025883053s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.015811421 +0000 UTC m=+1538.180518886" watchObservedRunningTime="2026-01-26 23:33:54.025883053 +0000 UTC m=+1538.190590518" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.109330 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.109314332 podStartE2EDuration="2.109314332s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.065192258 +0000 UTC m=+1538.229899723" watchObservedRunningTime="2026-01-26 23:33:54.109314332 +0000 UTC m=+1538.274021797" Jan 26 23:33:54 crc kubenswrapper[4995]: I0126 23:33:54.946504 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:56 crc kubenswrapper[4995]: I0126 23:33:56.152589 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:56 crc kubenswrapper[4995]: I0126 23:33:56.189844 4995 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=4.189813682 podStartE2EDuration="4.189813682s" podCreationTimestamp="2026-01-26 23:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:33:54.1084173 +0000 UTC m=+1538.273124755" watchObservedRunningTime="2026-01-26 23:33:56.189813682 +0000 UTC m=+1540.354521147" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.119448 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.533401 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.562306 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:33:57 crc kubenswrapper[4995]: I0126 23:33:57.579111 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.444988 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.497873 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.537330 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.546990 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:02 crc kubenswrapper[4995]: 
I0126 23:34:02.563055 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.579838 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.580707 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:02 crc kubenswrapper[4995]: I0126 23:34:02.625777 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.031356 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.047613 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.054631 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.069834 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:03 crc kubenswrapper[4995]: I0126 23:34:03.070671 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.532713 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533270 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" containerID="cri-o://c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533336 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" containerID="cri-o://932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533315 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" containerID="cri-o://3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" gracePeriod=30 Jan 26 23:34:05 crc kubenswrapper[4995]: I0126 23:34:05.533351 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" containerID="cri-o://3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" gracePeriod=30 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055477 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" exitCode=0 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055508 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" exitCode=2 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055515 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" 
containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" exitCode=0 Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055533 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055558 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.055567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"} Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.700087 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766686 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766777 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766840 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766927 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.766960 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767121 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767194 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.767254 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") pod \"d3ce857e-376e-4fd3-b74a-17165502ac6d\" (UID: \"d3ce857e-376e-4fd3-b74a-17165502ac6d\") " Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.775448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.776184 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.778947 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm" (OuterVolumeSpecName: "kube-api-access-wfszm") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "kube-api-access-wfszm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.783431 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts" (OuterVolumeSpecName: "scripts") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.816793 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.832139 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.866574 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.868671 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data" (OuterVolumeSpecName: "config-data") pod "d3ce857e-376e-4fd3-b74a-17165502ac6d" (UID: "d3ce857e-376e-4fd3-b74a-17165502ac6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869622 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869649 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869662 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfszm\" (UniqueName: \"kubernetes.io/projected/d3ce857e-376e-4fd3-b74a-17165502ac6d-kube-api-access-wfszm\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869670 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-config-data\") on node \"crc\" DevicePath 
\"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869678 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869686 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869693 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d3ce857e-376e-4fd3-b74a-17165502ac6d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:06 crc kubenswrapper[4995]: I0126 23:34:06.869701 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d3ce857e-376e-4fd3-b74a-17165502ac6d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067155 4995 generic.go:334] "Generic (PLEG): container finished" podID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" exitCode=0 Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067224 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067250 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"} Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067302 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"d3ce857e-376e-4fd3-b74a-17165502ac6d","Type":"ContainerDied","Data":"58ac57ce8561ca84068d3e00e6215e8d0c2515e11d417b9339ed05b0b53177bc"} Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.067379 4995 scope.go:117] "RemoveContainer" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.091558 4995 scope.go:117] "RemoveContainer" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.107022 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.113711 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.119481 4995 scope.go:117] "RemoveContainer" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.141436 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.141990 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142014 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142032 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142040 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142057 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142068 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.142081 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142087 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142317 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-central-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142334 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="ceilometer-notification-agent" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.142346 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="sg-core" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 
23:34:07.142361 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" containerName="proxy-httpd" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.145314 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.149846 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.149997 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.150021 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.158244 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.170792 4995 scope.go:117] "RemoveContainer" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177371 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177460 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc 
kubenswrapper[4995]: I0126 23:34:07.177504 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177538 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.177989 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.178882 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.204643 4995 scope.go:117] "RemoveContainer" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.216637 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": container with ID starting with 932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca not found: ID does not exist" containerID="932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.216712 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca"} err="failed to get container status \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": rpc error: code = NotFound desc = could not find container \"932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca\": container with ID starting with 932c9428407bd13d26488015e04fb973e84151ee9072a3567634a96adc6b92ca not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.216825 4995 scope.go:117] "RemoveContainer" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.217593 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": container with ID starting with 3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427 not found: ID does not exist" containerID="3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.217717 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427"} err="failed to get container status \"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": rpc error: code = NotFound desc = could not find container \"3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427\": container with ID starting with 3b0bd43ab7ef357eaf7e4f3ed55a7e3f5aebcc15b54bdb8310ae3bc75fecf427 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.217751 4995 scope.go:117] "RemoveContainer" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.218586 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": container with ID starting with 3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885 not found: ID does not exist" containerID="3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.218636 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885"} err="failed to get container status \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": rpc error: code = NotFound desc = could not find container \"3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885\": container with ID 
starting with 3a3cc9ba0cdebd6e73f1ca011d35bd1550b79bbdbf678e332fc00499173f2885 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.218666 4995 scope.go:117] "RemoveContainer" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: E0126 23:34:07.218958 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": container with ID starting with c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468 not found: ID does not exist" containerID="c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.219002 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468"} err="failed to get container status \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": rpc error: code = NotFound desc = could not find container \"c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468\": container with ID starting with c52bd5dadc3300c0a7e79e06b6da1c9f3c53e8daf3b445968ecfa37ba6541468 not found: ID does not exist" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280530 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280599 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280661 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280708 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280750 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280781 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280808 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280866 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.280968 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.281756 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.290648 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.291086 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.291428 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod 
\"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.292224 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.293209 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.303835 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"ceilometer-0\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.463051 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:07 crc kubenswrapper[4995]: I0126 23:34:07.997459 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:08 crc kubenswrapper[4995]: W0126 23:34:08.001876 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3386474_d50c_4dcf_b6b5_9aae87610ee5.slice/crio-464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62 WatchSource:0}: Error finding container 464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62: Status 404 returned error can't find the container with id 464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62 Jan 26 23:34:08 crc kubenswrapper[4995]: I0126 23:34:08.077413 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62"} Jan 26 23:34:08 crc kubenswrapper[4995]: I0126 23:34:08.529012 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3ce857e-376e-4fd3-b74a-17165502ac6d" path="/var/lib/kubelet/pods/d3ce857e-376e-4fd3-b74a-17165502ac6d/volumes" Jan 26 23:34:09 crc kubenswrapper[4995]: I0126 23:34:09.085639 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda"} Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.099244 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e"} Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 
23:34:10.641347 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.653223 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.699506 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838173 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838198 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: 
\"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838490 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.838584 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.940069 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.940514 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941658 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: 
\"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941705 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941726 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.941790 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.942644 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.944802 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc 
kubenswrapper[4995]: I0126 23:34:10.946429 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.947786 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.947816 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:10 crc kubenswrapper[4995]: I0126 23:34:10.958868 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"watcher-kuttl-api-2\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.014191 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.111964 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1"} Jan 26 23:34:11 crc kubenswrapper[4995]: W0126 23:34:11.497406 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2648dc76_5b29_4f04_817d_f0fdd488f830.slice/crio-49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d WatchSource:0}: Error finding container 49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d: Status 404 returned error can't find the container with id 49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d Jan 26 23:34:11 crc kubenswrapper[4995]: I0126 23:34:11.524910 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.124451 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerStarted","Data":"3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.125233 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126806 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126830 4995 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.126840 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerStarted","Data":"49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d"} Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.127427 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.129841 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.200:9322/\": dial tcp 10.217.0.200:9322: connect: connection refused" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.151693 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.595581165 podStartE2EDuration="5.15167577s" podCreationTimestamp="2026-01-26 23:34:07 +0000 UTC" firstStartedPulling="2026-01-26 23:34:08.004397375 +0000 UTC m=+1552.169104840" lastFinishedPulling="2026-01-26 23:34:11.56049197 +0000 UTC m=+1555.725199445" observedRunningTime="2026-01-26 23:34:12.151049845 +0000 UTC m=+1556.315757320" watchObservedRunningTime="2026-01-26 23:34:12.15167577 +0000 UTC m=+1556.316383235" Jan 26 23:34:12 crc kubenswrapper[4995]: I0126 23:34:12.185655 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-2" podStartSLOduration=2.18562014 podStartE2EDuration="2.18562014s" podCreationTimestamp="2026-01-26 23:34:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:12.182780849 +0000 UTC m=+1556.347488314" watchObservedRunningTime="2026-01-26 23:34:12.18562014 +0000 UTC m=+1556.350327605" Jan 26 23:34:15 crc kubenswrapper[4995]: I0126 23:34:15.470144 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:16 crc kubenswrapper[4995]: I0126 23:34:16.014816 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:18 crc kubenswrapper[4995]: I0126 23:34:18.431426 4995 scope.go:117] "RemoveContainer" containerID="5c8b671cebf48be8f42cb3eef0c6c4d073d6c81d7a64dfc2632acbf31acbc964" Jan 26 23:34:18 crc kubenswrapper[4995]: I0126 23:34:18.466850 4995 scope.go:117] "RemoveContainer" containerID="21fc0623b802d82a641a134593a6142947f12ed59ae9a3e0731b353104bba872" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.014990 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.026027 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:21 crc kubenswrapper[4995]: I0126 23:34:21.233848 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.323749 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.374849 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.375207 4995 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" containerID="cri-o://a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533" gracePeriod=30 Jan 26 23:34:22 crc kubenswrapper[4995]: I0126 23:34:22.375798 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" containerID="cri-o://00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.181977 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.197:9322/\": read tcp 10.217.0.2:42618->10.217.0.197:9322: read: connection reset by peer" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.182003 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.197:9322/\": read tcp 10.217.0.2:42620->10.217.0.197:9322: read: connection reset by peer" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255150 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerID="00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4" exitCode=0 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255198 4995 generic.go:334] "Generic (PLEG): container finished" podID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerID="a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533" exitCode=143 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255233 4995 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4"} Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255289 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533"} Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255477 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log" containerID="cri-o://e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.255631 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-2" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" containerID="cri-o://1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0" gracePeriod=30 Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.611461 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688377 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688475 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688507 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688555 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688603 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.688621 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") pod \"8cfc8556-a69e-418c-b52e-de4b1baa474f\" (UID: \"8cfc8556-a69e-418c-b52e-de4b1baa474f\") " Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.690424 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs" (OuterVolumeSpecName: "logs") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.701313 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl" (OuterVolumeSpecName: "kube-api-access-mlpcl") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "kube-api-access-mlpcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.718363 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.774469 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.780391 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data" (OuterVolumeSpecName: "config-data") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790470 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8cfc8556-a69e-418c-b52e-de4b1baa474f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790501 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlpcl\" (UniqueName: \"kubernetes.io/projected/8cfc8556-a69e-418c-b52e-de4b1baa474f-kube-api-access-mlpcl\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790513 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790523 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.790531 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.809967 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "8cfc8556-a69e-418c-b52e-de4b1baa474f" (UID: "8cfc8556-a69e-418c-b52e-de4b1baa474f"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:23 crc kubenswrapper[4995]: I0126 23:34:23.892069 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/8cfc8556-a69e-418c-b52e-de4b1baa474f-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269786 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"8cfc8556-a69e-418c-b52e-de4b1baa474f","Type":"ContainerDied","Data":"2b794c5d957bd49aebae6d5afbe71ffae4c423dc061005fe477c13b7c05312fd"} Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269837 4995 scope.go:117] "RemoveContainer" containerID="00b844ad0368bdac37b62cc021cdfa035e9f04f04f9e4a86053c4753583ad2b4" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.269965 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275069 4995 generic.go:334] "Generic (PLEG): container finished" podID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerID="1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0" exitCode=0 Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275122 4995 generic.go:334] "Generic (PLEG): container finished" podID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerID="e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d" exitCode=143 Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275125 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0"} Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.275176 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d"} Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.314717 4995 scope.go:117] "RemoveContainer" containerID="a651eaf938bf7342c3b42b676d6d6e0269a84942ed71a9342b8438edbfad3533" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.319639 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.331561 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.519025 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.534388 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" path="/var/lib/kubelet/pods/8cfc8556-a69e-418c-b52e-de4b1baa474f/volumes" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615479 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615524 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615566 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615666 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615693 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") pod 
\"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.615727 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") pod \"2648dc76-5b29-4f04-817d-f0fdd488f830\" (UID: \"2648dc76-5b29-4f04-817d-f0fdd488f830\") " Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.616180 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs" (OuterVolumeSpecName: "logs") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.620315 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q" (OuterVolumeSpecName: "kube-api-access-xwr7q") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "kube-api-access-xwr7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.646367 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.648865 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.655310 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data" (OuterVolumeSpecName: "config-data") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.678339 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "2648dc76-5b29-4f04-817d-f0fdd488f830" (UID: "2648dc76-5b29-4f04-817d-f0fdd488f830"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717071 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwr7q\" (UniqueName: \"kubernetes.io/projected/2648dc76-5b29-4f04-817d-f0fdd488f830-kube-api-access-xwr7q\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717136 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717173 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717182 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717192 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2648dc76-5b29-4f04-817d-f0fdd488f830-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:24 crc kubenswrapper[4995]: I0126 23:34:24.717200 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2648dc76-5b29-4f04-817d-f0fdd488f830-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.292707 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-2" event={"ID":"2648dc76-5b29-4f04-817d-f0fdd488f830","Type":"ContainerDied","Data":"49fb0ec72b531eb14fdf7c505294e8fb258ee01215e4efb28e5a0951e6d10a5d"} Jan 26 23:34:25 
crc kubenswrapper[4995]: I0126 23:34:25.292775 4995 scope.go:117] "RemoveContainer" containerID="1e6e82e02a900f23b63f0dfe11e39fe3d2309a39b59b7f7c85cb350cdac04ae0" Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.292953 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-2" Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.334830 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.336092 4995 scope.go:117] "RemoveContainer" containerID="e9bb2055bcdf2712ecbb3c7daed756c651b2b4bf46893b70080bb192560a272d" Jan 26 23:34:25 crc kubenswrapper[4995]: I0126 23:34:25.346141 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-2"] Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.538675 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" path="/var/lib/kubelet/pods/2648dc76-5b29-4f04-817d-f0fdd488f830/volumes" Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.579512 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.579785 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" containerID="cri-o://697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" gracePeriod=30 Jan 26 23:34:26 crc kubenswrapper[4995]: I0126 23:34:26.580294 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" 
containerID="cri-o://92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" gracePeriod=30 Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.321282 4995 generic.go:334] "Generic (PLEG): container finished" podID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" exitCode=143 Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.321330 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"} Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.532740 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.532764 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.196:9322/\": dial tcp 10.217.0.196:9322: connect: connection refused" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.777910 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.788502 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-d4txg"] Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817564 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 
23:34:27.817874 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817886 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817894 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817901 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817915 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817922 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc kubenswrapper[4995]: E0126 23:34:27.817936 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.817941 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818128 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818148 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-kuttl-api-log" Jan 26 23:34:27 crc 
kubenswrapper[4995]: I0126 23:34:27.818158 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cfc8556-a69e-418c-b52e-de4b1baa474f" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818165 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="2648dc76-5b29-4f04-817d-f0fdd488f830" containerName="watcher-api" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.818684 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.848747 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.889285 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.889682 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" containerID="cri-o://d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" gracePeriod=30 Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.912841 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.913158 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" containerID="cri-o://7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" gracePeriod=30 Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.981871 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:27 crc kubenswrapper[4995]: I0126 23:34:27.981938 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.006620 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.083158 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.083863 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.085032 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: 
\"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.104719 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"watcher1555-account-delete-g8298\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.149093 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186020 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186201 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186299 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186802 4995 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186909 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.187093 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") pod \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\" (UID: \"39b9a08a-84f3-4779-bc4c-1cf42869c99d\") " Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.186614 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs" (OuterVolumeSpecName: "logs") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.203264 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn" (OuterVolumeSpecName: "kube-api-access-9hvxn") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "kube-api-access-9hvxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.220405 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.231406 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.251500 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data" (OuterVolumeSpecName: "config-data") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.286388 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "39b9a08a-84f3-4779-bc4c-1cf42869c99d" (UID: "39b9a08a-84f3-4779-bc4c-1cf42869c99d"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288574 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288594 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288604 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39b9a08a-84f3-4779-bc4c-1cf42869c99d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288612 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288620 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/39b9a08a-84f3-4779-bc4c-1cf42869c99d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.288628 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hvxn\" (UniqueName: \"kubernetes.io/projected/39b9a08a-84f3-4779-bc4c-1cf42869c99d-kube-api-access-9hvxn\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337700 4995 generic.go:334] "Generic (PLEG): container finished" podID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" exitCode=0 Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337744 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"} Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337770 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"39b9a08a-84f3-4779-bc4c-1cf42869c99d","Type":"ContainerDied","Data":"3a6cd3187438302097e41f8997601ad4ed78776668fbf86e5b5c905b7d06906d"} Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337778 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.337787 4995 scope.go:117] "RemoveContainer" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.390396 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.392007 4995 scope.go:117] "RemoveContainer" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.399565 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.410573 4995 scope.go:117] "RemoveContainer" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" Jan 26 23:34:28 crc kubenswrapper[4995]: E0126 23:34:28.413352 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": container with ID starting with 92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b not found: ID 
does not exist" containerID="92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413493 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b"} err="failed to get container status \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": rpc error: code = NotFound desc = could not find container \"92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b\": container with ID starting with 92295b75da1db33e5da5399eae5a5d1954839101f800b8242c5d1cb4de219b2b not found: ID does not exist" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413585 4995 scope.go:117] "RemoveContainer" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" Jan 26 23:34:28 crc kubenswrapper[4995]: E0126 23:34:28.413949 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": container with ID starting with 697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f not found: ID does not exist" containerID="697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.413985 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f"} err="failed to get container status \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": rpc error: code = NotFound desc = could not find container \"697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f\": container with ID starting with 697aaf9282ffad36ea0ce7cc4d18695c8d538a946caf88303e3ea751d5fe671f not found: ID does not exist" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.526158 4995 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" path="/var/lib/kubelet/pods/39b9a08a-84f3-4779-bc4c-1cf42869c99d/volumes" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.526729 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b1297fe-4233-44a3-864c-2564bef1017f" path="/var/lib/kubelet/pods/9b1297fe-4233-44a3-864c-2564bef1017f/volumes" Jan 26 23:34:28 crc kubenswrapper[4995]: I0126 23:34:28.626613 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:28 crc kubenswrapper[4995]: W0126 23:34:28.628634 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47c8ef00_c407_45a9_bc09_b975263baccf.slice/crio-61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc WatchSource:0}: Error finding container 61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc: Status 404 returned error can't find the container with id 61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347074 4995 generic.go:334] "Generic (PLEG): container finished" podID="47c8ef00-c407-45a9-bc09-b975263baccf" containerID="25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b" exitCode=0 Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347169 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerDied","Data":"25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b"} Jan 26 23:34:29 crc kubenswrapper[4995]: I0126 23:34:29.347484 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" 
event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerStarted","Data":"61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc"} Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167461 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167852 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" containerID="cri-o://e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" gracePeriod=30 Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167944 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" containerID="cri-o://3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" gracePeriod=30 Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167950 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" containerID="cri-o://5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" gracePeriod=30 Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.167969 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" containerID="cri-o://7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" gracePeriod=30 Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.234485 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" 
probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.674514 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.825920 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") pod \"47c8ef00-c407-45a9-bc09-b975263baccf\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.826162 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") pod \"47c8ef00-c407-45a9-bc09-b975263baccf\" (UID: \"47c8ef00-c407-45a9-bc09-b975263baccf\") " Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.826755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47c8ef00-c407-45a9-bc09-b975263baccf" (UID: "47c8ef00-c407-45a9-bc09-b975263baccf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.832614 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf" (OuterVolumeSpecName: "kube-api-access-xq6kf") pod "47c8ef00-c407-45a9-bc09-b975263baccf" (UID: "47c8ef00-c407-45a9-bc09-b975263baccf"). InnerVolumeSpecName "kube-api-access-xq6kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.927934 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47c8ef00-c407-45a9-bc09-b975263baccf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:30 crc kubenswrapper[4995]: I0126 23:34:30.927969 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq6kf\" (UniqueName: \"kubernetes.io/projected/47c8ef00-c407-45a9-bc09-b975263baccf-kube-api-access-xq6kf\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.151542 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152166 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152185 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152213 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152222 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: E0126 23:34:31.152240 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152250 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 
23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152431 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-kuttl-api-log" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152447 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" containerName="mariadb-account-delete" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.152463 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="39b9a08a-84f3-4779-bc4c-1cf42869c99d" containerName="watcher-api" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.153844 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.180278 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233525 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233818 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.233973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335488 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335583 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.335685 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.336192 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.336261 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.356151 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"certified-operators-kjkxx\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.371936 4995 generic.go:334] "Generic (PLEG): container finished" podID="7fd4c263-1050-4645-a224-8e1f758e4495" containerID="d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.372050 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerDied","Data":"d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375438 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" event={"ID":"47c8ef00-c407-45a9-bc09-b975263baccf","Type":"ContainerDied","Data":"61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375473 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ac876aec7923c87032ab4c1bd8c1224e114529b9ad4cb19fcda7693302b0dc" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.375524 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher1555-account-delete-g8298" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380203 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380236 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" exitCode=2 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380247 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" exitCode=0 Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380267 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380293 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.380306 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda"} Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.497043 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.587243 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747562 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747855 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747896 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.747955 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: \"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.748000 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") pod \"7fd4c263-1050-4645-a224-8e1f758e4495\" (UID: 
\"7fd4c263-1050-4645-a224-8e1f758e4495\") " Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.748442 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs" (OuterVolumeSpecName: "logs") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.759959 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w" (OuterVolumeSpecName: "kube-api-access-tn67w") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "kube-api-access-tn67w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.780083 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.810179 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data" (OuterVolumeSpecName: "config-data") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.831230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7fd4c263-1050-4645-a224-8e1f758e4495" (UID: "7fd4c263-1050-4645-a224-8e1f758e4495"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849388 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn67w\" (UniqueName: \"kubernetes.io/projected/7fd4c263-1050-4645-a224-8e1f758e4495-kube-api-access-tn67w\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849427 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849437 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849446 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7fd4c263-1050-4645-a224-8e1f758e4495-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:31 crc kubenswrapper[4995]: I0126 23:34:31.849456 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fd4c263-1050-4645-a224-8e1f758e4495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.022288 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388399 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e" exitCode=0 Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388450 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.388699 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"adc7d9abe152f60e5a599fd53bf316012dacec6f8f8be6b6961feea31585f3d6"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.389799 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.390407 4995 generic.go:334] "Generic (PLEG): container finished" podID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerID="7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" exitCode=0 Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.390479 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerDied","Data":"7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392557 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"7fd4c263-1050-4645-a224-8e1f758e4495","Type":"ContainerDied","Data":"dae048464ca14139239006ce1ddddc5b74e74d486974462fe0bfd5796420c08e"} Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392582 4995 scope.go:117] "RemoveContainer" containerID="d19632ddd195db4ccb4d1fec947e424c4ea9433d0900fd8944957a701581ae55" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.392946 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.431237 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.439660 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.498985 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.547069 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" path="/var/lib/kubelet/pods/7fd4c263-1050-4645-a224-8e1f758e4495/volumes" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661708 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661791 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") pod 
\"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661814 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661859 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.661925 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") pod \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\" (UID: \"fd4eea70-3af8-412b-8a7f-8abda2350f7a\") " Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.663473 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs" (OuterVolumeSpecName: "logs") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.667084 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l" (OuterVolumeSpecName: "kube-api-access-jmr4l") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "kube-api-access-jmr4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.684859 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.696172 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.739036 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data" (OuterVolumeSpecName: "config-data") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.745935 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "fd4eea70-3af8-412b-8a7f-8abda2350f7a" (UID: "fd4eea70-3af8-412b-8a7f-8abda2350f7a"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.763526 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764419 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmr4l\" (UniqueName: \"kubernetes.io/projected/fd4eea70-3af8-412b-8a7f-8abda2350f7a-kube-api-access-jmr4l\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764760 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764846 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd4eea70-3af8-412b-8a7f-8abda2350f7a-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.764929 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.765001 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/fd4eea70-3af8-412b-8a7f-8abda2350f7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.837313 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.843190 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-c64b2"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.853150 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.859706 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.866883 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher1555-account-delete-g8298"] Jan 26 23:34:32 crc kubenswrapper[4995]: I0126 23:34:32.871641 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-1555-account-create-update-j8dp6"] Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.402564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404546 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"fd4eea70-3af8-412b-8a7f-8abda2350f7a","Type":"ContainerDied","Data":"cbea931bd0838e2c97cfcebadf6458ddc41bc04afdf7d81934ae7c0566e45a9b"} Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404603 4995 scope.go:117] "RemoveContainer" 
containerID="7ba57781504f7092ac75ef403a28945ae13079b33c156708e6f728cfe78e77e8" Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.404756 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.461157 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:33 crc kubenswrapper[4995]: I0126 23:34:33.467558 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051406 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:34 crc kubenswrapper[4995]: E0126 23:34:34.051710 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051721 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: E0126 23:34:34.051740 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051746 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051889 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd4c263-1050-4645-a224-8e1f758e4495" containerName="watcher-applier" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.051910 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" 
containerName="watcher-decision-engine" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.052434 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.063982 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.092397 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.093447 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.099156 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.111876 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.192957 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193034 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " 
pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193059 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.193117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295042 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295129 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295152 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295180 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.295981 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.296070 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.314815 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"watcher-0966-account-create-update-wjl7d\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.328537 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"watcher-db-create-tddnh\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.406209 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.426120 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389" exitCode=0 Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.426219 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.432814 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.532171 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47c8ef00-c407-45a9-bc09-b975263baccf" path="/var/lib/kubelet/pods/47c8ef00-c407-45a9-bc09-b975263baccf/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.533856 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="599bdb97-9d21-44b9-9a59-84320b1c4a6e" path="/var/lib/kubelet/pods/599bdb97-9d21-44b9-9a59-84320b1c4a6e/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.536723 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2f73d1-0380-4fcf-9fde-35f821426fed" path="/var/lib/kubelet/pods/ca2f73d1-0380-4fcf-9fde-35f821426fed/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.537472 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4eea70-3af8-412b-8a7f-8abda2350f7a" path="/var/lib/kubelet/pods/fd4eea70-3af8-412b-8a7f-8abda2350f7a/volumes" Jan 26 23:34:34 crc kubenswrapper[4995]: W0126 23:34:34.916672 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd487adb0_ddf0_4932_9fad_09dfb2de1d00.slice/crio-aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5 WatchSource:0}: Error finding container aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5: Status 404 returned error can't find the container with id aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5 Jan 26 23:34:34 crc kubenswrapper[4995]: I0126 23:34:34.918922 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.056750 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 
26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.446923 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerStarted","Data":"05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.447214 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerStarted","Data":"57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.456203 4995 generic.go:334] "Generic (PLEG): container finished" podID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerID="7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" exitCode=0 Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.456257 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458002 4995 generic.go:334] "Generic (PLEG): container finished" podID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerID="cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055" exitCode=0 Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458042 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerDied","Data":"cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.458059 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" 
event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerStarted","Data":"aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.460417 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerStarted","Data":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"} Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.466765 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" podStartSLOduration=1.466749766 podStartE2EDuration="1.466749766s" podCreationTimestamp="2026-01-26 23:34:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:35.462789367 +0000 UTC m=+1579.627496832" watchObservedRunningTime="2026-01-26 23:34:35.466749766 +0000 UTC m=+1579.631457231" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.490510 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjkxx" podStartSLOduration=2.004854938 podStartE2EDuration="4.490492592s" podCreationTimestamp="2026-01-26 23:34:31 +0000 UTC" firstStartedPulling="2026-01-26 23:34:32.389589427 +0000 UTC m=+1576.554296892" lastFinishedPulling="2026-01-26 23:34:34.875227081 +0000 UTC m=+1579.039934546" observedRunningTime="2026-01-26 23:34:35.486551903 +0000 UTC m=+1579.651259378" watchObservedRunningTime="2026-01-26 23:34:35.490492592 +0000 UTC m=+1579.655200067" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.729456 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839384 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839442 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839496 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839578 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839600 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839680 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839726 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839760 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") pod \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\" (UID: \"a3386474-d50c-4dcf-b6b5-9aae87610ee5\") " Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839818 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.839999 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.840200 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.840228 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3386474-d50c-4dcf-b6b5-9aae87610ee5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.845416 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w" (OuterVolumeSpecName: "kube-api-access-d5j7w") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "kube-api-access-d5j7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.846630 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts" (OuterVolumeSpecName: "scripts") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.879470 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.917662 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.934397 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data" (OuterVolumeSpecName: "config-data") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.935633 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a3386474-d50c-4dcf-b6b5-9aae87610ee5" (UID: "a3386474-d50c-4dcf-b6b5-9aae87610ee5"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944187 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944220 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944233 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944247 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5j7w\" (UniqueName: \"kubernetes.io/projected/a3386474-d50c-4dcf-b6b5-9aae87610ee5-kube-api-access-d5j7w\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944261 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:35 crc kubenswrapper[4995]: I0126 23:34:35.944272 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3386474-d50c-4dcf-b6b5-9aae87610ee5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.500058 4995 generic.go:334] "Generic (PLEG): container finished" podID="32a18765-f113-401c-850b-e585b2f3bd59" containerID="05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52" exitCode=0 Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.500138 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerDied","Data":"05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52"} Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"a3386474-d50c-4dcf-b6b5-9aae87610ee5","Type":"ContainerDied","Data":"464dc0604caf674cfc5cd0b86de7eea3f3ee7745d7ecd919bcd18bc051110f62"} Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503076 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.503128 4995 scope.go:117] "RemoveContainer" containerID="3ba1f75aff84b3911ed9b4b0c5a01c12ee8d7a0011e88c10d24956806d412ce3" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.540640 4995 scope.go:117] "RemoveContainer" containerID="5d8eb0fafb47003f7b3d91ad0b8cd1cfe249d5534930cfb4d031a36317e1a5a1" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.566787 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.580362 4995 scope.go:117] "RemoveContainer" containerID="7d15cca2bc1baf6063b034c732082ceda61f3e8a8fa3faca8867cf61c611773e" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.580523 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.588982 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589383 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc 
kubenswrapper[4995]: I0126 23:34:36.589403 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589436 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589447 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589467 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589475 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: E0126 23:34:36.589488 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589496 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589703 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="ceilometer-central-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589722 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="sg-core" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589736 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" 
containerName="ceilometer-notification-agent" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.589749 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" containerName="proxy-httpd" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.591486 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594029 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594606 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.594981 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.595408 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.617672 4995 scope.go:117] "RemoveContainer" containerID="e39cd8d3b2d8dc5768ce6e0e2ae2c899a43d8ff5921753135b3150a977d5edda" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758590 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758865 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758949 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.758969 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759117 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759155 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.759286 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860674 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860731 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860813 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860852 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860877 4995 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860899 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860930 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.860951 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.862794 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.864004 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"ceilometer-0\" (UID: 
\"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.864871 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.866570 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.872724 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.873492 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.874464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.881483 4995 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"ceilometer-0\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.910850 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:36 crc kubenswrapper[4995]: I0126 23:34:36.925755 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.062709 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") pod \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.062884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") pod \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\" (UID: \"d487adb0-ddf0-4932-9fad-09dfb2de1d00\") " Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.063698 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d487adb0-ddf0-4932-9fad-09dfb2de1d00" (UID: "d487adb0-ddf0-4932-9fad-09dfb2de1d00"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.071489 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx" (OuterVolumeSpecName: "kube-api-access-qqltx") pod "d487adb0-ddf0-4932-9fad-09dfb2de1d00" (UID: "d487adb0-ddf0-4932-9fad-09dfb2de1d00"). InnerVolumeSpecName "kube-api-access-qqltx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.164434 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d487adb0-ddf0-4932-9fad-09dfb2de1d00-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.164471 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqltx\" (UniqueName: \"kubernetes.io/projected/d487adb0-ddf0-4932-9fad-09dfb2de1d00-kube-api-access-qqltx\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.390831 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.517869 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-tddnh" event={"ID":"d487adb0-ddf0-4932-9fad-09dfb2de1d00","Type":"ContainerDied","Data":"aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5"} Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.517938 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aebc051a6c323dd72858e998286e44cb05dda694c003abf80e1c823776a8f8e5" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.518061 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-tddnh" Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.526690 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"a00645c5e1dd09271e74863e0e5c91226b9b85c9d1bb4a0367151708e8674b54"} Jan 26 23:34:37 crc kubenswrapper[4995]: I0126 23:34:37.970316 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.078122 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") pod \"32a18765-f113-401c-850b-e585b2f3bd59\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.078237 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") pod \"32a18765-f113-401c-850b-e585b2f3bd59\" (UID: \"32a18765-f113-401c-850b-e585b2f3bd59\") " Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.082899 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4" (OuterVolumeSpecName: "kube-api-access-vj2q4") pod "32a18765-f113-401c-850b-e585b2f3bd59" (UID: "32a18765-f113-401c-850b-e585b2f3bd59"). InnerVolumeSpecName "kube-api-access-vj2q4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.083230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "32a18765-f113-401c-850b-e585b2f3bd59" (UID: "32a18765-f113-401c-850b-e585b2f3bd59"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.180231 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/32a18765-f113-401c-850b-e585b2f3bd59-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.180277 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vj2q4\" (UniqueName: \"kubernetes.io/projected/32a18765-f113-401c-850b-e585b2f3bd59-kube-api-access-vj2q4\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.527831 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3386474-d50c-4dcf-b6b5-9aae87610ee5" path="/var/lib/kubelet/pods/a3386474-d50c-4dcf-b6b5-9aae87610ee5/volumes" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.538223 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"} Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.540640 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" event={"ID":"32a18765-f113-401c-850b-e585b2f3bd59","Type":"ContainerDied","Data":"57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8"} Jan 26 23:34:38 crc kubenswrapper[4995]: 
I0126 23:34:38.540677 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57e1113b141e5cdc90ca2c6a555835e026eae246413475b254aa598c9e83e8c8" Jan 26 23:34:38 crc kubenswrapper[4995]: I0126 23:34:38.540739 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-0966-account-create-update-wjl7d" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298313 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:34:39 crc kubenswrapper[4995]: E0126 23:34:39.298932 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298950 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create" Jan 26 23:34:39 crc kubenswrapper[4995]: E0126 23:34:39.298969 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.298979 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299134 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a18765-f113-401c-850b-e585b2f3bd59" containerName="mariadb-account-create-update" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299159 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" containerName="mariadb-database-create" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.299655 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.302425 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc6mk" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.303232 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.311062 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397618 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397721 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397784 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.397942 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499515 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499643 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499679 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.499704 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 
23:34:39.503474 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.504273 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.504718 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.516020 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"watcher-kuttl-db-sync-b8hbl\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.548887 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"} Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.550607 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"} Jan 26 23:34:39 crc kubenswrapper[4995]: I0126 23:34:39.613739 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.069412 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.557307 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerStarted","Data":"bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b"} Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.557613 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerStarted","Data":"9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050"} Jan 26 23:34:40 crc kubenswrapper[4995]: I0126 23:34:40.572346 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" podStartSLOduration=1.572331337 podStartE2EDuration="1.572331337s" podCreationTimestamp="2026-01-26 23:34:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:40.56928359 +0000 UTC m=+1584.733991065" watchObservedRunningTime="2026-01-26 23:34:40.572331337 +0000 UTC m=+1584.737038802" Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.497764 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:41 
crc kubenswrapper[4995]: I0126 23:34:41.498063 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.544375 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.570310 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerStarted","Data":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"} Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.597624 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.621973284 podStartE2EDuration="5.597607645s" podCreationTimestamp="2026-01-26 23:34:36 +0000 UTC" firstStartedPulling="2026-01-26 23:34:37.395926874 +0000 UTC m=+1581.560634359" lastFinishedPulling="2026-01-26 23:34:40.371561245 +0000 UTC m=+1584.536268720" observedRunningTime="2026-01-26 23:34:41.593873581 +0000 UTC m=+1585.758581096" watchObservedRunningTime="2026-01-26 23:34:41.597607645 +0000 UTC m=+1585.762315100" Jan 26 23:34:41 crc kubenswrapper[4995]: I0126 23:34:41.640151 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:42 crc kubenswrapper[4995]: I0126 23:34:42.579061 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:34:43 crc kubenswrapper[4995]: I0126 23:34:43.589476 4995 generic.go:334] "Generic (PLEG): container finished" podID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerID="bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b" exitCode=0 Jan 26 23:34:43 crc kubenswrapper[4995]: I0126 23:34:43.589555 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerDied","Data":"bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b"} Jan 26 23:34:44 crc kubenswrapper[4995]: I0126 23:34:44.999000 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083788 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083862 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083917 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.083959 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") pod \"3b5f9d2a-2291-4153-8d71-602f827fd381\" (UID: \"3b5f9d2a-2291-4153-8d71-602f827fd381\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.100316 4995 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt" (OuterVolumeSpecName: "kube-api-access-wg4wt") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "kube-api-access-wg4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.108210 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.133270 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.140326 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data" (OuterVolumeSpecName: "config-data") pod "3b5f9d2a-2291-4153-8d71-602f827fd381" (UID: "3b5f9d2a-2291-4153-8d71-602f827fd381"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.172650 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.172876 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjkxx" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" containerID="cri-o://f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" gracePeriod=2 Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.186982 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187021 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187035 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg4wt\" (UniqueName: \"kubernetes.io/projected/3b5f9d2a-2291-4153-8d71-602f827fd381-kube-api-access-wg4wt\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.187047 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b5f9d2a-2291-4153-8d71-602f827fd381-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.516226 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594002 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594051 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.594176 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") pod \"88605b61-373f-4ead-b09a-9aeda8950ab0\" (UID: \"88605b61-373f-4ead-b09a-9aeda8950ab0\") " Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.595829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities" (OuterVolumeSpecName: "utilities") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.598857 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv" (OuterVolumeSpecName: "kube-api-access-d8ncv") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "kube-api-access-d8ncv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611822 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" event={"ID":"3b5f9d2a-2291-4153-8d71-602f827fd381","Type":"ContainerDied","Data":"9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050"} Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611856 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f41623d1d6a33bc57b262e2ed5931e173521fb5ada02efe68c0474e3e48c050" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.611918 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620302 4995 generic.go:334] "Generic (PLEG): container finished" podID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" exitCode=0 Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620339 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"} Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620365 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjkxx" event={"ID":"88605b61-373f-4ead-b09a-9aeda8950ab0","Type":"ContainerDied","Data":"adc7d9abe152f60e5a599fd53bf316012dacec6f8f8be6b6961feea31585f3d6"} Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620383 4995 scope.go:117] "RemoveContainer" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.620513 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjkxx" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.662072 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88605b61-373f-4ead-b09a-9aeda8950ab0" (UID: "88605b61-373f-4ead-b09a-9aeda8950ab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.690939 4995 scope.go:117] "RemoveContainer" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696084 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696157 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88605b61-373f-4ead-b09a-9aeda8950ab0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.696186 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ncv\" (UniqueName: \"kubernetes.io/projected/88605b61-373f-4ead-b09a-9aeda8950ab0-kube-api-access-d8ncv\") on node \"crc\" DevicePath \"\"" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.714107 4995 scope.go:117] "RemoveContainer" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.729724 4995 scope.go:117] "RemoveContainer" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.730428 4995 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": container with ID starting with f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e not found: ID does not exist" containerID="f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730464 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e"} err="failed to get container status \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": rpc error: code = NotFound desc = could not find container \"f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e\": container with ID starting with f5bd6adf734c3f17957dbe9fe707e8f8f3f1cce206cd9f3c203989467f7a9a5e not found: ID does not exist" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730488 4995 scope.go:117] "RemoveContainer" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.730840 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": container with ID starting with 69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389 not found: ID does not exist" containerID="69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730898 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389"} err="failed to get container status \"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": rpc error: code = NotFound desc = could not find container 
\"69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389\": container with ID starting with 69f033d99f37f484d9d44407df81d881c693a70398ca6bcb75119d3592075389 not found: ID does not exist" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.730937 4995 scope.go:117] "RemoveContainer" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.731310 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": container with ID starting with b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e not found: ID does not exist" containerID="b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.731344 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e"} err="failed to get container status \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": rpc error: code = NotFound desc = could not find container \"b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e\": container with ID starting with b13fc23467127fb54ca926161c407edc0168ec05840d6e9c0fd4a80445bc216e not found: ID does not exist" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883168 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883472 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-utilities" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883488 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" 
containerName="extract-utilities" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883512 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883520 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883532 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-content" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883539 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="extract-content" Jan 26 23:34:45 crc kubenswrapper[4995]: E0126 23:34:45.883556 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883561 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883714 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" containerName="watcher-kuttl-db-sync" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.883734 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" containerName="registry-server" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.884357 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.888627 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-cc6mk" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.889146 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.899338 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.951108 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.952359 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.957209 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.972792 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.981156 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.985528 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:45 crc kubenswrapper[4995]: I0126 23:34:45.988606 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001641 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001706 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001723 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.001809 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.018785 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.027020 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.035259 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjkxx"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.102860 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103137 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103262 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" 
(UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103331 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103437 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103836 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103935 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.103797 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104013 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104226 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104333 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104419 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104524 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: 
\"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.104973 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105040 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105141 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.105504 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108239 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.108773 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.124788 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"watcher-kuttl-applier-0\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.206840 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: 
\"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207148 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207169 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207210 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207285 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207307 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207322 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207350 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207366 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207386 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.207399 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.208232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.208505 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.210685 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.210719 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod 
\"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.211745 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.212615 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.215722 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.216089 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.222738 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: 
\"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.223741 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.224227 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"watcher-kuttl-api-0\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.224657 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.265553 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.287089 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.301792 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.531007 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88605b61-373f-4ead-b09a-9aeda8950ab0" path="/var/lib/kubelet/pods/88605b61-373f-4ead-b09a-9aeda8950ab0/volumes" Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.764230 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.856819 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:34:46 crc kubenswrapper[4995]: I0126 23:34:46.899005 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.643271 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerStarted","Data":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.643561 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerStarted","Data":"b7111921d0bcb4ece6cd10fa5e18b18895898ed7aa2249d33d860e1754a300c5"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650491 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650544 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" 
event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650559 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerStarted","Data":"a2912924b0f5fcf0004fb3575adbc36625d7116187c79bb884ef553334908c42"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.650713 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.652398 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerStarted","Data":"e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.652442 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerStarted","Data":"a4ce0a9663c549496780173c4daf62f761575e14888d812de2623b4acc727c19"} Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.664681 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.664662602 podStartE2EDuration="2.664662602s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.661889763 +0000 UTC m=+1591.826597228" watchObservedRunningTime="2026-01-26 23:34:47.664662602 +0000 UTC m=+1591.829370067" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.691244 4995 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.69122729 podStartE2EDuration="2.69122729s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.686904421 +0000 UTC m=+1591.851611886" watchObservedRunningTime="2026-01-26 23:34:47.69122729 +0000 UTC m=+1591.855934755" Jan 26 23:34:47 crc kubenswrapper[4995]: I0126 23:34:47.711976 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.71195182 podStartE2EDuration="2.71195182s" podCreationTimestamp="2026-01-26 23:34:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:34:47.706058812 +0000 UTC m=+1591.870766277" watchObservedRunningTime="2026-01-26 23:34:47.71195182 +0000 UTC m=+1591.876659285" Jan 26 23:34:50 crc kubenswrapper[4995]: I0126 23:34:50.159792 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:51 crc kubenswrapper[4995]: I0126 23:34:51.266503 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:51 crc kubenswrapper[4995]: I0126 23:34:51.288980 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.266382 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.289608 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc 
kubenswrapper[4995]: I0126 23:34:56.302546 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.312907 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.353069 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.394654 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.754839 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.763716 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.785643 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:34:56 crc kubenswrapper[4995]: I0126 23:34:56.806791 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.962173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.962709 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" 
containerID="cri-o://6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964538 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" containerID="cri-o://fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964662 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" containerID="cri-o://2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.964801 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" containerID="cri-o://c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" gracePeriod=30 Jan 26 23:34:57 crc kubenswrapper[4995]: I0126 23:34:57.975779 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.205:3000/\": EOF" Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775863 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" exitCode=0 Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775907 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" exitCode=2 
Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775923 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" exitCode=0 Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.775967 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"} Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.776018 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"} Jan 26 23:34:58 crc kubenswrapper[4995]: I0126 23:34:58.776041 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.573323 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653773 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653844 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653910 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653953 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.653990 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654062 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654168 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.654206 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") pod \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\" (UID: \"0e1b3fa8-47bf-4484-98a7-b131e9bed123\") " Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.656606 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.656737 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.660309 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz" (OuterVolumeSpecName: "kube-api-access-4bgkz") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "kube-api-access-4bgkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.673921 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts" (OuterVolumeSpecName: "scripts") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.689998 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.713880 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.732751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.739350 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data" (OuterVolumeSpecName: "config-data") pod "0e1b3fa8-47bf-4484-98a7-b131e9bed123" (UID: "0e1b3fa8-47bf-4484-98a7-b131e9bed123"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756315 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756352 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756370 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756381 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc 
kubenswrapper[4995]: I0126 23:35:00.756392 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756402 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0e1b3fa8-47bf-4484-98a7-b131e9bed123-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756413 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0e1b3fa8-47bf-4484-98a7-b131e9bed123-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.756424 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgkz\" (UniqueName: \"kubernetes.io/projected/0e1b3fa8-47bf-4484-98a7-b131e9bed123-kube-api-access-4bgkz\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793739 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" exitCode=0 Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793776 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"0e1b3fa8-47bf-4484-98a7-b131e9bed123","Type":"ContainerDied","Data":"a00645c5e1dd09271e74863e0e5c91226b9b85c9d1bb4a0367151708e8674b54"} Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793824 
4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.793849 4995 scope.go:117] "RemoveContainer" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.814655 4995 scope.go:117] "RemoveContainer" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.828371 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.837337 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.841439 4995 scope.go:117] "RemoveContainer" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.853718 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.861503 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.861853 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.861945 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862016 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc 
kubenswrapper[4995]: E0126 23:35:00.862129 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862203 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.862281 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862713 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-notification-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862800 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="sg-core" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862878 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="ceilometer-central-agent" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.862960 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" containerName="proxy-httpd" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.865076 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.867389 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870400 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870505 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.870677 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.903233 4995 scope.go:117] "RemoveContainer" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.927391 4995 scope.go:117] "RemoveContainer" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928061 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": container with ID starting with fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808 not found: ID does not exist" containerID="fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928112 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808"} err="failed to get container status \"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": rpc error: code = NotFound desc = could not find container 
\"fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808\": container with ID starting with fb45c07fd7d24df63d6985314a27e52c8fe3ae90a0860de89c4156c40c213808 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928135 4995 scope.go:117] "RemoveContainer" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928371 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": container with ID starting with c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2 not found: ID does not exist" containerID="c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928432 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2"} err="failed to get container status \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": rpc error: code = NotFound desc = could not find container \"c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2\": container with ID starting with c50dc290d16b69469afab996a6bb22e01de1d6e42bb7ecb691b52275c05f3eb2 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928448 4995 scope.go:117] "RemoveContainer" containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.928834 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": container with ID starting with 2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502 not found: ID does not exist" 
containerID="2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928859 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502"} err="failed to get container status \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": rpc error: code = NotFound desc = could not find container \"2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502\": container with ID starting with 2debb696278d8510b5af0f26b0261dabd0fb1e9293a639fe6c0170991e1f8502 not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.928877 4995 scope.go:117] "RemoveContainer" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: E0126 23:35:00.929064 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": container with ID starting with 6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f not found: ID does not exist" containerID="6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.929093 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f"} err="failed to get container status \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": rpc error: code = NotFound desc = could not find container \"6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f\": container with ID starting with 6919fb008890b37a303883f797795461f7b895d65e9557a1fac399fdad90907f not found: ID does not exist" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964373 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964507 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964560 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964640 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964727 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964766 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964832 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:00 crc kubenswrapper[4995]: I0126 23:35:00.964861 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066714 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066773 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066844 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") 
pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066886 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066907 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066943 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.066984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.067008 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc 
kubenswrapper[4995]: I0126 23:35:01.067280 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.067495 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.070506 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.072055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.074211 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.076271 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.082961 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.087045 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.189460 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.685728 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:01 crc kubenswrapper[4995]: I0126 23:35:01.804523 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"a273f23b5e6b3152076243b8eb373d6a89966d29af4ec92e1e40164b0324f64f"} Jan 26 23:35:02 crc kubenswrapper[4995]: I0126 23:35:02.527396 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b3fa8-47bf-4484-98a7-b131e9bed123" path="/var/lib/kubelet/pods/0e1b3fa8-47bf-4484-98a7-b131e9bed123/volumes" Jan 26 23:35:02 crc kubenswrapper[4995]: I0126 23:35:02.814608 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" 
event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} Jan 26 23:35:03 crc kubenswrapper[4995]: I0126 23:35:03.826369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} Jan 26 23:35:03 crc kubenswrapper[4995]: I0126 23:35:03.826935 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.088173 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.096892 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-b8hbl"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.129758 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.130729 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.148212 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.196131 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.208978 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" containerID="cri-o://6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.233760 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.233837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.269538 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.269798 4995 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" containerID="cri-o://e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.284815 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.285050 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" containerID="cri-o://1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.285450 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" containerID="cri-o://52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" gracePeriod=30 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.334871 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.334929 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 
23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.335782 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.360985 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"watcher0966-account-delete-7lmgj\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.444693 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.526068 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5f9d2a-2291-4153-8d71-602f827fd381" path="/var/lib/kubelet/pods/3b5f9d2a-2291-4153-8d71-602f827fd381/volumes" Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.838749 4995 generic.go:334] "Generic (PLEG): container finished" podID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" exitCode=143 Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.838800 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} Jan 26 23:35:04 crc kubenswrapper[4995]: I0126 23:35:04.892553 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.548492 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.654906 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.654945 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655050 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655067 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655122 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: 
\"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.655136 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") pod \"f2bc50a4-5dd7-42df-9279-4d07dd760275\" (UID: \"f2bc50a4-5dd7-42df-9279-4d07dd760275\") " Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.656256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs" (OuterVolumeSpecName: "logs") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.663415 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp" (OuterVolumeSpecName: "kube-api-access-2ljtp") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "kube-api-access-2ljtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.685778 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.702847 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data" (OuterVolumeSpecName: "config-data") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.707191 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.725267 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "f2bc50a4-5dd7-42df-9279-4d07dd760275" (UID: "f2bc50a4-5dd7-42df-9279-4d07dd760275"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757167 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757196 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757206 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ljtp\" (UniqueName: \"kubernetes.io/projected/f2bc50a4-5dd7-42df-9279-4d07dd760275-kube-api-access-2ljtp\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757215 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757224 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bc50a4-5dd7-42df-9279-4d07dd760275-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.757232 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/f2bc50a4-5dd7-42df-9279-4d07dd760275-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850242 4995 generic.go:334] "Generic (PLEG): container finished" podID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" exitCode=0 Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850324 4995 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850345 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850369 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"f2bc50a4-5dd7-42df-9279-4d07dd760275","Type":"ContainerDied","Data":"a2912924b0f5fcf0004fb3575adbc36625d7116187c79bb884ef553334908c42"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.850392 4995 scope.go:117] "RemoveContainer" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.854466 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerStarted","Data":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.855722 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859646 4995 generic.go:334] "Generic (PLEG): container finished" podID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerID="404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285" exitCode=0 Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859705 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" 
event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerDied","Data":"404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.859738 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerStarted","Data":"3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179"} Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.884401 4995 scope.go:117] "RemoveContainer" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.888959 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.625176278 podStartE2EDuration="5.888939893s" podCreationTimestamp="2026-01-26 23:35:00 +0000 UTC" firstStartedPulling="2026-01-26 23:35:01.684135285 +0000 UTC m=+1605.848842760" lastFinishedPulling="2026-01-26 23:35:04.94789891 +0000 UTC m=+1609.112606375" observedRunningTime="2026-01-26 23:35:05.875753251 +0000 UTC m=+1610.040460716" watchObservedRunningTime="2026-01-26 23:35:05.888939893 +0000 UTC m=+1610.053647358" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.898583 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.904949 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.915722 4995 scope.go:117] "RemoveContainer" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: E0126 23:35:05.916082 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": container with ID starting with 52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae not found: ID does not exist" containerID="52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916127 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae"} err="failed to get container status \"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": rpc error: code = NotFound desc = could not find container \"52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae\": container with ID starting with 52052265b360b7089c8eadfec863816abfd52fba79a1929c65ee6f0b6fe885ae not found: ID does not exist" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916149 4995 scope.go:117] "RemoveContainer" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: E0126 23:35:05.916394 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": container with ID starting with 1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1 not found: ID does not exist" containerID="1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1" Jan 26 23:35:05 crc kubenswrapper[4995]: I0126 23:35:05.916414 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1"} err="failed to get container status \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": rpc error: code = NotFound desc = could not find container \"1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1\": container with ID 
starting with 1022dce8b1dcbe0a8574b952dd987484e6c5ec86a828f26c0f85d6bf1903bdc1 not found: ID does not exist" Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.267667 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.271566 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.275259 4995 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Jan 26 23:35:06 crc kubenswrapper[4995]: E0126 23:35:06.275346 4995 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:06 crc kubenswrapper[4995]: I0126 23:35:06.526246 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" path="/var/lib/kubelet/pods/f2bc50a4-5dd7-42df-9279-4d07dd760275/volumes" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.235931 4995 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.412387 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.486573 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") pod \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.486631 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") pod \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\" (UID: \"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd\") " Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.487015 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" (UID: "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.487198 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.493769 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf" (OuterVolumeSpecName: "kube-api-access-bzpdf") pod "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" (UID: "8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd"). InnerVolumeSpecName "kube-api-access-bzpdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.588500 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzpdf\" (UniqueName: \"kubernetes.io/projected/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd-kube-api-access-bzpdf\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895504 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" event={"ID":"8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd","Type":"ContainerDied","Data":"3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179"} Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895830 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher0966-account-delete-7lmgj" Jan 26 23:35:07 crc kubenswrapper[4995]: I0126 23:35:07.895908 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e46777b83ef7137b672d4d002a9a57f36054fe8886eaa98beed4e83da6fa179" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.434340 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.505072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.505959 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506144 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506247 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.506386 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") pod \"ab805559-bee4-4905-95db-b9fd0da719ed\" (UID: \"ab805559-bee4-4905-95db-b9fd0da719ed\") " Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.507475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs" (OuterVolumeSpecName: "logs") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.514255 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp" (OuterVolumeSpecName: "kube-api-access-ht7vp") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "kube-api-access-ht7vp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.539135 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.565681 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data" (OuterVolumeSpecName: "config-data") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.587583 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "ab805559-bee4-4905-95db-b9fd0da719ed" (UID: "ab805559-bee4-4905-95db-b9fd0da719ed"). 
InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608201 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht7vp\" (UniqueName: \"kubernetes.io/projected/ab805559-bee4-4905-95db-b9fd0da719ed-kube-api-access-ht7vp\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608229 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab805559-bee4-4905-95db-b9fd0da719ed-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608241 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608250 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.608259 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab805559-bee4-4905-95db-b9fd0da719ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.905555 4995 generic.go:334] "Generic (PLEG): container finished" podID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerID="e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" exitCode=0 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.905737 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" 
event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerDied","Data":"e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912239 4995 generic.go:334] "Generic (PLEG): container finished" podID="ab805559-bee4-4905-95db-b9fd0da719ed" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" exitCode=0 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912383 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerDied","Data":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912479 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912771 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"ab805559-bee4-4905-95db-b9fd0da719ed","Type":"ContainerDied","Data":"b7111921d0bcb4ece6cd10fa5e18b18895898ed7aa2249d33d860e1754a300c5"} Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.912904 4995 scope.go:117] "RemoveContainer" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.913901 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" containerID="cri-o://6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914153 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" 
podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" containerID="cri-o://9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914218 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" containerID="cri-o://551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.914267 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" containerID="cri-o://864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" gracePeriod=30 Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.939400 4995 scope.go:117] "RemoveContainer" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: E0126 23:35:08.939982 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": container with ID starting with 6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c not found: ID does not exist" containerID="6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.940012 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c"} err="failed to get container status \"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": rpc error: code = NotFound desc = could not find container 
\"6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c\": container with ID starting with 6f5c49ec6dd7153b614113a9e759fb83ec3ffea5355516a9b0a77c791d88642c not found: ID does not exist" Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.963214 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:08 crc kubenswrapper[4995]: I0126 23:35:08.982190 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.161354 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.175638 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-tddnh"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.186969 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.199000 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.207621 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.221174 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-0966-account-create-update-wjl7d"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.231164 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher0966-account-delete-7lmgj"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277119 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 
23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277438 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277451 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277466 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277472 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277488 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277495 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277506 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277511 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: E0126 23:35:09.277523 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277529 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" 
containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277662 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" containerName="watcher-decision-engine" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277676 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" containerName="mariadb-account-delete" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277686 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-api" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277694 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" containerName="watcher-applier" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.277703 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bc50a4-5dd7-42df-9279-4d07dd760275" containerName="watcher-kuttl-api-log" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.278271 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.291665 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324264 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324338 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324435 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324505 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324535 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: 
\"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.324569 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") pod \"cacd898a-7524-4989-95ce-0b7a05e318ba\" (UID: \"cacd898a-7524-4989-95ce-0b7a05e318ba\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.325333 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs" (OuterVolumeSpecName: "logs") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.353725 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2" (OuterVolumeSpecName: "kube-api-access-6xgz2") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "kube-api-access-6xgz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.361411 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.362447 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.364547 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.367235 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.367411 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-db-secret" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.388041 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.407986 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data" (OuterVolumeSpecName: "config-data") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427702 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427758 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427809 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.427892 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428001 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428017 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cacd898a-7524-4989-95ce-0b7a05e318ba-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428030 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428043 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xgz2\" (UniqueName: \"kubernetes.io/projected/cacd898a-7524-4989-95ce-0b7a05e318ba-kube-api-access-6xgz2\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.428056 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.455491 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "cacd898a-7524-4989-95ce-0b7a05e318ba" (UID: "cacd898a-7524-4989-95ce-0b7a05e318ba"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529781 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529874 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.529974 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.530016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.530071 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/cacd898a-7524-4989-95ce-0b7a05e318ba-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 
26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.531319 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.532381 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.559516 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"watcher-db-create-b6hk2\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.559524 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"watcher-test-account-create-update-sq8zx\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.601548 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.697070 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.818005 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834464 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834532 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834597 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834641 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834664 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: 
\"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834681 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fg78\" (UniqueName: \"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834699 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.834720 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") pod \"021b4697-13c5-4573-b049-d089667af404\" (UID: \"021b4697-13c5-4573-b049-d089667af404\") " Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.835262 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.835992 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.840256 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78" (OuterVolumeSpecName: "kube-api-access-2fg78") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "kube-api-access-2fg78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.841000 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts" (OuterVolumeSpecName: "scripts") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.869276 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.894001 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.922965 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946487 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946520 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946533 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/021b4697-13c5-4573-b049-d089667af404-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946544 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946556 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946567 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fg78\" (UniqueName: 
\"kubernetes.io/projected/021b4697-13c5-4573-b049-d089667af404-kube-api-access-2fg78\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.946574 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.951917 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data" (OuterVolumeSpecName: "config-data") pod "021b4697-13c5-4573-b049-d089667af404" (UID: "021b4697-13c5-4573-b049-d089667af404"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954405 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954429 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" exitCode=2 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954439 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954450 4995 generic.go:334] "Generic (PLEG): container finished" podID="021b4697-13c5-4573-b049-d089667af404" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" exitCode=0 Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954500 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954529 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954553 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954564 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"021b4697-13c5-4573-b049-d089667af404","Type":"ContainerDied","Data":"a273f23b5e6b3152076243b8eb373d6a89966d29af4ec92e1e40164b0324f64f"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954582 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.954719 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.965448 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"cacd898a-7524-4989-95ce-0b7a05e318ba","Type":"ContainerDied","Data":"a4ce0a9663c549496780173c4daf62f761575e14888d812de2623b4acc727c19"} Jan 26 23:35:09 crc kubenswrapper[4995]: I0126 23:35:09.965522 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.006069 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.019949 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.037654 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.047467 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/021b4697-13c5-4573-b049-d089667af404-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049413 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049763 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049785 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049804 4995 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049813 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049837 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049843 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.049852 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.049860 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050018 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="proxy-httpd" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050035 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-central-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050043 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="ceilometer-notification-agent" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.050055 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="021b4697-13c5-4573-b049-d089667af404" containerName="sg-core" Jan 26 23:35:10 crc kubenswrapper[4995]: 
I0126 23:35:10.051419 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054448 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054655 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.054820 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.060136 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.101716 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.112204 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.125970 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.134935 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.152017 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 
23:35:10.157270 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157332 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157543 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157586 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157626 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.157697 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.207517 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.241765 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.243430 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 
9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243502 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.243952 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243983 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.243997 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.244603 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc 
kubenswrapper[4995]: I0126 23:35:10.244645 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.244674 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: E0126 23:35:10.245036 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245065 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245080 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 
23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245462 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245486 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245797 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.245816 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.247589 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container 
\"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.247636 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248431 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248470 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248946 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.248999 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.250689 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.250718 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251037 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251067 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251476 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 
6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251498 4995 scope.go:117] "RemoveContainer" containerID="9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251720 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7"} err="failed to get container status \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": rpc error: code = NotFound desc = could not find container \"9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7\": container with ID starting with 9865e79d901813a6af4a58256865095703786ae29223fd1a05cb7bf1dbccf0d7 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251743 4995 scope.go:117] "RemoveContainer" containerID="551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.251982 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039"} err="failed to get container status \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": rpc error: code = NotFound desc = could not find container \"551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039\": container with ID starting with 551b0a9708fc25a2dbbee9cca6aa1e66078eab283c6f3e09473147b673132039 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252003 4995 scope.go:117] "RemoveContainer" containerID="864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252242 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04"} err="failed to get container status \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": rpc error: code = NotFound desc = could not find container \"864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04\": container with ID starting with 864da56ad3d63bf086aadb27e82b46b9032ec7b696fc64c3a80b0d518040ec04 not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252262 4995 scope.go:117] "RemoveContainer" containerID="6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252463 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f"} err="failed to get container status \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": rpc error: code = NotFound desc = could not find container \"6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f\": container with ID starting with 6fbb8136f178385bd3dacdb0433a0118677b8a51c7ee8e28da34de2d218eed3f not found: ID does not exist" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.252482 4995 scope.go:117] "RemoveContainer" containerID="e07eaa72eb177eaf2a37100cc97cdd1c26f5ab5989805c27ed8f959646687ff1" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.258941 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.258986 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259016 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259031 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259060 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259130 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259156 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: 
\"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259221 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.259624 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.263232 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.264764 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.264809 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.268805 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.273803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.280793 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.289298 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"ceilometer-0\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.407221 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.467416 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.529998 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="021b4697-13c5-4573-b049-d089667af404" path="/var/lib/kubelet/pods/021b4697-13c5-4573-b049-d089667af404/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.530908 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a18765-f113-401c-850b-e585b2f3bd59" path="/var/lib/kubelet/pods/32a18765-f113-401c-850b-e585b2f3bd59/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.531521 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd" path="/var/lib/kubelet/pods/8ebd88b6-9f7f-44c2-89b0-3efd2d1517bd/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.532880 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab805559-bee4-4905-95db-b9fd0da719ed" path="/var/lib/kubelet/pods/ab805559-bee4-4905-95db-b9fd0da719ed/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.533535 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cacd898a-7524-4989-95ce-0b7a05e318ba" path="/var/lib/kubelet/pods/cacd898a-7524-4989-95ce-0b7a05e318ba/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.534125 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d487adb0-ddf0-4932-9fad-09dfb2de1d00" path="/var/lib/kubelet/pods/d487adb0-ddf0-4932-9fad-09dfb2de1d00/volumes" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.893971 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.894299 4995 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984139 4995 generic.go:334] "Generic (PLEG): container finished" podID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerID="eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0" exitCode=0 Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984284 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerDied","Data":"eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.984388 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerStarted","Data":"c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986742 4995 generic.go:334] "Generic (PLEG): container finished" podID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerID="30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c" exitCode=0 Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986812 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerDied","Data":"30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c"} Jan 26 23:35:10 crc kubenswrapper[4995]: I0126 23:35:10.986837 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerStarted","Data":"afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280"} Jan 26 23:35:11 crc kubenswrapper[4995]: I0126 23:35:11.042050 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.000799 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"} Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.001235 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"d28b1e3d26673a8db157b6635da5ecd48dd8d6acf8388d4c2dd8e2ae15407e7f"} Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.559370 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.563030 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712078 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") pod \"949c118d-bfd2-4707-9091-abc3434a4fb6\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712166 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") pod \"0e413561-4428-409c-9ca8-2eb61cbe1489\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712202 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") pod \"949c118d-bfd2-4707-9091-abc3434a4fb6\" (UID: \"949c118d-bfd2-4707-9091-abc3434a4fb6\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712474 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "949c118d-bfd2-4707-9091-abc3434a4fb6" (UID: "949c118d-bfd2-4707-9091-abc3434a4fb6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712731 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") pod \"0e413561-4428-409c-9ca8-2eb61cbe1489\" (UID: \"0e413561-4428-409c-9ca8-2eb61cbe1489\") " Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.712829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e413561-4428-409c-9ca8-2eb61cbe1489" (UID: "0e413561-4428-409c-9ca8-2eb61cbe1489"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.713156 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949c118d-bfd2-4707-9091-abc3434a4fb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.713173 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e413561-4428-409c-9ca8-2eb61cbe1489-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.715779 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h" (OuterVolumeSpecName: "kube-api-access-sx58h") pod "949c118d-bfd2-4707-9091-abc3434a4fb6" (UID: "949c118d-bfd2-4707-9091-abc3434a4fb6"). InnerVolumeSpecName "kube-api-access-sx58h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.717677 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r" (OuterVolumeSpecName: "kube-api-access-plc7r") pod "0e413561-4428-409c-9ca8-2eb61cbe1489" (UID: "0e413561-4428-409c-9ca8-2eb61cbe1489"). InnerVolumeSpecName "kube-api-access-plc7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.814080 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plc7r\" (UniqueName: \"kubernetes.io/projected/0e413561-4428-409c-9ca8-2eb61cbe1489-kube-api-access-plc7r\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:12 crc kubenswrapper[4995]: I0126 23:35:12.814130 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx58h\" (UniqueName: \"kubernetes.io/projected/949c118d-bfd2-4707-9091-abc3434a4fb6-kube-api-access-sx58h\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010807 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" event={"ID":"949c118d-bfd2-4707-9091-abc3434a4fb6","Type":"ContainerDied","Data":"afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280"} Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010840 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afc478abc4ef1cb2f494320c7726868f0a9be51ea4ba0b187258c18c53e38280" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.010835 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-test-account-create-update-sq8zx" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012879 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-db-create-b6hk2" event={"ID":"0e413561-4428-409c-9ca8-2eb61cbe1489","Type":"ContainerDied","Data":"c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208"} Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012940 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c02f06581a692f6b91fcc1f7bac610f1c9f4543019b9d28ecf55c95987cc2208" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.012938 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-db-create-b6hk2" Jan 26 23:35:13 crc kubenswrapper[4995]: I0126 23:35:13.015374 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"} Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.026046 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"} Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.641432 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:14 crc kubenswrapper[4995]: E0126 23:35:14.642015 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642031 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" 
containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: E0126 23:35:14.642060 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642068 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642208 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" containerName="mariadb-database-create" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642231 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" containerName="mariadb-account-create-update" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.642720 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.644450 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-crvrq" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.644937 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.653890 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699593 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699694 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699732 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.699771 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801119 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801413 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801537 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.801682 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 
23:35:14.805713 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.806612 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.808593 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.816055 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"watcher-kuttl-db-sync-9vqnd\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:14 crc kubenswrapper[4995]: I0126 23:35:14.966120 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.055220 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerStarted","Data":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"} Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.056697 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.094695 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.885939571 podStartE2EDuration="5.094678084s" podCreationTimestamp="2026-01-26 23:35:10 +0000 UTC" firstStartedPulling="2026-01-26 23:35:11.037331259 +0000 UTC m=+1615.202038754" lastFinishedPulling="2026-01-26 23:35:14.246069802 +0000 UTC m=+1618.410777267" observedRunningTime="2026-01-26 23:35:15.083557335 +0000 UTC m=+1619.248264810" watchObservedRunningTime="2026-01-26 23:35:15.094678084 +0000 UTC m=+1619.259385549" Jan 26 23:35:15 crc kubenswrapper[4995]: I0126 23:35:15.498963 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.063932 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerStarted","Data":"19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3"} Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.064224 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" 
event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerStarted","Data":"b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6"} Jan 26 23:35:16 crc kubenswrapper[4995]: I0126 23:35:16.080423 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" podStartSLOduration=2.08040786 podStartE2EDuration="2.08040786s" podCreationTimestamp="2026-01-26 23:35:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:16.077514057 +0000 UTC m=+1620.242221522" watchObservedRunningTime="2026-01-26 23:35:16.08040786 +0000 UTC m=+1620.245115325" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.632135 4995 scope.go:117] "RemoveContainer" containerID="2db44657dba863e9126ee66626ff3e903712a488e479e67578bed8c8358c38cb" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.693223 4995 scope.go:117] "RemoveContainer" containerID="38e04a8783a7a6b7dfb30a4ee34a81ba70fceb4a22c66572b6533babbef0e4a8" Jan 26 23:35:18 crc kubenswrapper[4995]: I0126 23:35:18.720561 4995 scope.go:117] "RemoveContainer" containerID="558c3ee7288987b85477ab6a956972ed10ae51e028f06cd7ca485975cd8be8ff" Jan 26 23:35:19 crc kubenswrapper[4995]: I0126 23:35:19.091759 4995 generic.go:334] "Generic (PLEG): container finished" podID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerID="19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3" exitCode=0 Jan 26 23:35:19 crc kubenswrapper[4995]: I0126 23:35:19.091839 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerDied","Data":"19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3"} Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.533226 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648591 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648741 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648784 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.648837 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") pod \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\" (UID: \"f3e560ee-8e9f-41b9-a407-6879c581e5b5\") " Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.654410 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj" (OuterVolumeSpecName: "kube-api-access-2wjbj") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "kube-api-access-2wjbj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.659426 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.672376 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.698039 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data" (OuterVolumeSpecName: "config-data") pod "f3e560ee-8e9f-41b9-a407-6879c581e5b5" (UID: "f3e560ee-8e9f-41b9-a407-6879c581e5b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750808 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wjbj\" (UniqueName: \"kubernetes.io/projected/f3e560ee-8e9f-41b9-a407-6879c581e5b5-kube-api-access-2wjbj\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750838 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750848 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:20 crc kubenswrapper[4995]: I0126 23:35:20.750857 4995 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f3e560ee-8e9f-41b9-a407-6879c581e5b5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114359 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" event={"ID":"f3e560ee-8e9f-41b9-a407-6879c581e5b5","Type":"ContainerDied","Data":"b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6"} Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114398 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b69275d2753e53a55507d36fe3830ac60263304214a446660b94840a30af23f6" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.114423 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.373504 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: E0126 23:35:21.373928 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.373954 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.375207 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" containerName="watcher-kuttl-db-sync" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.376293 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.379566 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-watcher-kuttl-dockercfg-crvrq" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.380010 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-api-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.390178 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.429570 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.430972 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.478841 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.514585 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.515623 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.518472 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-applier-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.520780 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.543662 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.545065 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.548238 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-decision-engine-config-data" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568612 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568655 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568674 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568698 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568723 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568742 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568762 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568779 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568798 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568824 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568850 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.568895 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.581471 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670426 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670484 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670533 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670551 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670571 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670593 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670615 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670632 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670666 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670684 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670724 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670739 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670767 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670782 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.670817 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671081 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671113 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671148 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671194 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671213 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671251 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671277 4995 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671306 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.671975 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.672652 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.676747 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.677201 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.681074 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.681760 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.685671 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.686581 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.690884 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod 
\"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.691370 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.694303 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"watcher-kuttl-api-0\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.695926 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"watcher-kuttl-api-1\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.730003 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.764305 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773241 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773307 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773341 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773368 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773384 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773406 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773423 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773457 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773485 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773511 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.773533 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.779905 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.780465 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.781714 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.783596 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"watcher-kuttl-applier-0\" (UID: 
\"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.784539 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.784803 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.786935 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.790575 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.791670 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " 
pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.798870 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"watcher-kuttl-decision-engine-0\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.805250 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"watcher-kuttl-applier-0\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.841573 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:21 crc kubenswrapper[4995]: I0126 23:35:21.863599 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.209489 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.311936 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.322268 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod708f8ff2_4449_41ed_9436_28f9aae04852.slice/crio-3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f WatchSource:0}: Error finding container 3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f: Status 404 returned error can't find the container with id 3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.460722 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45aef819_2cda_443f_82ef_6e54a5be4261.slice/crio-49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad WatchSource:0}: Error finding container 49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad: Status 404 returned error can't find the container with id 49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.466404 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: I0126 23:35:22.493057 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:35:22 crc kubenswrapper[4995]: W0126 23:35:22.495864 4995 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5820f715_2962_4319_b398_fa2a9975c5ea.slice/crio-0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012 WatchSource:0}: Error finding container 0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012: Status 404 returned error can't find the container with id 0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012 Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157593 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157646 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.157660 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerStarted","Data":"3112ece43fb6b0f1c6030da3f87999145998e3975488ad4447ac4aa3ae034350"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.158130 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.159986 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160140 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160235 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerStarted","Data":"3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.160532 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.162481 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerStarted","Data":"8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.162536 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerStarted","Data":"49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.165829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerStarted","Data":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.165868 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" 
event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerStarted","Data":"0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012"} Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.190250 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-0" podStartSLOduration=2.189450485 podStartE2EDuration="2.189450485s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.186133882 +0000 UTC m=+1627.350841387" watchObservedRunningTime="2026-01-26 23:35:23.189450485 +0000 UTC m=+1627.354157980" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.211452 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podStartSLOduration=2.211436487 podStartE2EDuration="2.211436487s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.208197556 +0000 UTC m=+1627.372905031" watchObservedRunningTime="2026-01-26 23:35:23.211436487 +0000 UTC m=+1627.376143952" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.261231 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-api-1" podStartSLOduration=2.261212717 podStartE2EDuration="2.261212717s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.239389049 +0000 UTC m=+1627.404096524" watchObservedRunningTime="2026-01-26 23:35:23.261212717 +0000 UTC m=+1627.425920182" Jan 26 23:35:23 crc kubenswrapper[4995]: I0126 23:35:23.268563 4995 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podStartSLOduration=2.268543431 podStartE2EDuration="2.268543431s" podCreationTimestamp="2026-01-26 23:35:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:35:23.260342745 +0000 UTC m=+1627.425050210" watchObservedRunningTime="2026-01-26 23:35:23.268543431 +0000 UTC m=+1627.433250906" Jan 26 23:35:25 crc kubenswrapper[4995]: I0126 23:35:25.299930 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:25 crc kubenswrapper[4995]: I0126 23:35:25.655424 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.730837 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.764926 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:26 crc kubenswrapper[4995]: I0126 23:35:26.842713 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.731169 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.737974 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.766008 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 
23:35:31.773136 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.842330 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.865349 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.895643 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:31 crc kubenswrapper[4995]: I0126 23:35:31.907283 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.282000 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.289493 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.291641 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.305629 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:35:32 crc kubenswrapper[4995]: I0126 23:35:32.310721 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377182 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377613 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent" containerID="cri-o://f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" gracePeriod=30 Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377646 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" containerID="cri-o://f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" gracePeriod=30 Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377754 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core" containerID="cri-o://52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" gracePeriod=30 Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.377760 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent" containerID="cri-o://b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" gracePeriod=30 Jan 26 23:35:34 crc kubenswrapper[4995]: I0126 23:35:34.388245 4995 prober.go:107] "Probe failed" probeType="Readiness" pod="watcher-kuttl-default/ceilometer-0" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.214:3000/\": EOF" Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309145 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" 
containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" exitCode=0 Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309173 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" exitCode=2 Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309182 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" exitCode=0 Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"} Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309246 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"} Jan 26 23:35:35 crc kubenswrapper[4995]: I0126 23:35:35.309266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"} Jan 26 23:35:38 crc kubenswrapper[4995]: I0126 23:35:38.954859 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098466 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098532 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098561 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098594 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098649 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098782 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098858 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.098884 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") pod \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\" (UID: \"f41652c5-25d5-4bb9-bbfc-c460448d0ec6\") " Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.099205 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.099487 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.105029 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts" (OuterVolumeSpecName: "scripts") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.105955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx" (OuterVolumeSpecName: "kube-api-access-8sqrx") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "kube-api-access-8sqrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.131613 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.149399 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.197289 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data" (OuterVolumeSpecName: "config-data") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202845 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sqrx\" (UniqueName: \"kubernetes.io/projected/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-kube-api-access-8sqrx\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202890 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202909 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202927 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202945 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202961 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.202978 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.206641 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f41652c5-25d5-4bb9-bbfc-c460448d0ec6" (UID: "f41652c5-25d5-4bb9-bbfc-c460448d0ec6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.304413 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f41652c5-25d5-4bb9-bbfc-c460448d0ec6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352891 4995 generic.go:334] "Generic (PLEG): container finished" podID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" exitCode=0 Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352928 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"} Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352953 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"f41652c5-25d5-4bb9-bbfc-c460448d0ec6","Type":"ContainerDied","Data":"d28b1e3d26673a8db157b6635da5ecd48dd8d6acf8388d4c2dd8e2ae15407e7f"} Jan 26 23:35:39 crc 
kubenswrapper[4995]: I0126 23:35:39.352975 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.352987 4995 scope.go:117] "RemoveContainer" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.369564 4995 scope.go:117] "RemoveContainer" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.386988 4995 scope.go:117] "RemoveContainer" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.391652 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.405666 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.418335 4995 scope.go:117] "RemoveContainer" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.418704 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422007 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422049 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422071 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: 
I0126 23:35:39.422085 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422143 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422156 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.422177 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422189 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422495 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="sg-core" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422509 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="proxy-httpd" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422518 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-central-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.422530 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" containerName="ceilometer-notification-agent" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.425008 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.427662 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.427996 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.428232 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.431912 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.454758 4995 scope.go:117] "RemoveContainer" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.455272 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": container with ID starting with f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8 not found: ID does not exist" containerID="f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.455311 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8"} err="failed to get container status \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": rpc error: code = NotFound desc = could not find container \"f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8\": container with ID starting with f62cce54b8c29aba333ae310761fa65e04fe6ae0246d0e74e4449d0994c510d8 not found: ID does not 
exist" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.455345 4995 scope.go:117] "RemoveContainer" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.456819 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": container with ID starting with 52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5 not found: ID does not exist" containerID="52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.456856 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5"} err="failed to get container status \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": rpc error: code = NotFound desc = could not find container \"52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5\": container with ID starting with 52a0cfa8b2f52d45060199aa1aad6e32be197a582fe7e141be16522fdb68bbd5 not found: ID does not exist" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.456876 4995 scope.go:117] "RemoveContainer" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.457278 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": container with ID starting with b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2 not found: ID does not exist" containerID="b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.457299 4995 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2"} err="failed to get container status \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": rpc error: code = NotFound desc = could not find container \"b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2\": container with ID starting with b35ec1ceab40e00d0beee93381a4705ddb980900b5beb2b390d7670c3b9034e2 not found: ID does not exist" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.457312 4995 scope.go:117] "RemoveContainer" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" Jan 26 23:35:39 crc kubenswrapper[4995]: E0126 23:35:39.459786 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": container with ID starting with f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34 not found: ID does not exist" containerID="f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.459813 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34"} err="failed to get container status \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": rpc error: code = NotFound desc = could not find container \"f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34\": container with ID starting with f87739f4a73a3fd12f1ef94beddbbac2da59cf4ca8729dcebf390e3e06bf3c34 not found: ID does not exist" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507859 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") 
pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507921 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.507963 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508065 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508084 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508119 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508144 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.508265 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609756 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609821 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.609845 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 
23:35:39.610187 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610427 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610439 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610738 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610804 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.610853 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.611258 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.615479 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.615602 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.616445 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.616845 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 
26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.631775 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.636217 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"ceilometer-0\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:39 crc kubenswrapper[4995]: I0126 23:35:39.755854 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.345688 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.371738 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"2fa9a5a009be0079265a0dcc50a1983900de0b2654a05dca9c424e408f133efc"} Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.530356 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41652c5-25d5-4bb9-bbfc-c460448d0ec6" path="/var/lib/kubelet/pods/f41652c5-25d5-4bb9-bbfc-c460448d0ec6/volumes" Jan 26 23:35:40 crc kubenswrapper[4995]: I0126 23:35:40.893393 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:35:40 crc 
kubenswrapper[4995]: I0126 23:35:40.893477 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:35:41 crc kubenswrapper[4995]: I0126 23:35:41.383230 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"} Jan 26 23:35:42 crc kubenswrapper[4995]: I0126 23:35:42.391567 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"} Jan 26 23:35:42 crc kubenswrapper[4995]: I0126 23:35:42.391829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"} Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.416076 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerStarted","Data":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"} Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.416574 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:35:44 crc kubenswrapper[4995]: I0126 23:35:44.438090 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=2.188338234 
podStartE2EDuration="5.438075067s" podCreationTimestamp="2026-01-26 23:35:39 +0000 UTC" firstStartedPulling="2026-01-26 23:35:40.362018182 +0000 UTC m=+1644.526725647" lastFinishedPulling="2026-01-26 23:35:43.611755005 +0000 UTC m=+1647.776462480" observedRunningTime="2026-01-26 23:35:44.432862037 +0000 UTC m=+1648.597569502" watchObservedRunningTime="2026-01-26 23:35:44.438075067 +0000 UTC m=+1648.602782532" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.165660 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.167562 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.169624 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-scripts" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.171773 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"watcher-kuttl-config-data" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.182519 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302121 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302200 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts-volume\" (UniqueName: 
\"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302351 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.302491 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404221 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404278 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc 
kubenswrapper[4995]: I0126 23:36:00.404328 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.404444 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410507 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410560 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.410749 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " 
pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.422005 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"watcher-kuttl-db-purge-29491176-84qmp\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.489181 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:00 crc kubenswrapper[4995]: I0126 23:36:00.992139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:01 crc kubenswrapper[4995]: W0126 23:36:00.997967 4995 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27579212_06da_4939_bada_9ecd375faf00.slice/crio-ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da WatchSource:0}: Error finding container ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da: Status 404 returned error can't find the container with id ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.600932 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerStarted","Data":"75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2"} Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.601197 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" 
event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerStarted","Data":"ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da"} Jan 26 23:36:01 crc kubenswrapper[4995]: I0126 23:36:01.621417 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" podStartSLOduration=1.621391226 podStartE2EDuration="1.621391226s" podCreationTimestamp="2026-01-26 23:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:36:01.615739414 +0000 UTC m=+1665.780446879" watchObservedRunningTime="2026-01-26 23:36:01.621391226 +0000 UTC m=+1665.786098721" Jan 26 23:36:03 crc kubenswrapper[4995]: I0126 23:36:03.621820 4995 generic.go:334] "Generic (PLEG): container finished" podID="27579212-06da-4939-bada-9ecd375faf00" containerID="75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2" exitCode=0 Jan 26 23:36:03 crc kubenswrapper[4995]: I0126 23:36:03.621862 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerDied","Data":"75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2"} Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.014346 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191056 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191245 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191356 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.191536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") pod \"27579212-06da-4939-bada-9ecd375faf00\" (UID: \"27579212-06da-4939-bada-9ecd375faf00\") " Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.203829 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume" (OuterVolumeSpecName: "scripts-volume") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "scripts-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.204678 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29" (OuterVolumeSpecName: "kube-api-access-9jx29") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "kube-api-access-9jx29". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.217751 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.235911 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data" (OuterVolumeSpecName: "config-data") pod "27579212-06da-4939-bada-9ecd375faf00" (UID: "27579212-06da-4939-bada-9ecd375faf00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294622 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294692 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jx29\" (UniqueName: \"kubernetes.io/projected/27579212-06da-4939-bada-9ecd375faf00-kube-api-access-9jx29\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294719 4995 reconciler_common.go:293] "Volume detached for volume \"scripts-volume\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-scripts-volume\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.294744 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27579212-06da-4939-bada-9ecd375faf00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.645893 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" event={"ID":"27579212-06da-4939-bada-9ecd375faf00","Type":"ContainerDied","Data":"ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da"} Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.645993 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed356e303cfdfa896f07d3e494ca701058e6cdc789616e57d799d8c441ccb6da" Jan 26 23:36:05 crc kubenswrapper[4995]: I0126 23:36:05.646063 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.536835 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.542986 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-sync-9vqnd"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.555565 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.563610 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-db-purge-29491176-84qmp"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584477 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:07 crc kubenswrapper[4995]: E0126 23:36:07.584796 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584814 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.584982 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="27579212-06da-4939-bada-9ecd375faf00" containerName="watcher-db-manage" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.585775 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.603492 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686304 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686523 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" containerID="cri-o://594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.686635 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-1" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" containerID="cri-o://9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719466 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719703 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" containerID="cri-o://9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.719926 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-api-0" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" 
containerID="cri-o://6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.743553 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.743664 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.775373 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.776004 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-applier-0" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" containerID="cri-o://40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.793983 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.804816 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" 
containerID="cri-o://8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057" gracePeriod=30 Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.847947 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.848033 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.848765 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.877771 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"watchertest-account-delete-fw4t2\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:07 crc kubenswrapper[4995]: I0126 23:36:07.900470 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.433477 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.526788 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27579212-06da-4939-bada-9ecd375faf00" path="/var/lib/kubelet/pods/27579212-06da-4939-bada-9ecd375faf00/volumes" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.527646 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e560ee-8e9f-41b9-a407-6879c581e5b5" path="/var/lib/kubelet/pods/f3e560ee-8e9f-41b9-a407-6879c581e5b5/volumes" Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.678819 4995 generic.go:334] "Generic (PLEG): container finished" podID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" exitCode=143 Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.679053 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.681320 4995 generic.go:334] "Generic (PLEG): container finished" podID="708f8ff2-4449-41ed-9436-28f9aae04852" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" exitCode=143 Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.681352 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.682623 4995 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerStarted","Data":"1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.682649 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerStarted","Data":"cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24"} Jan 26 23:36:08 crc kubenswrapper[4995]: I0126 23:36:08.700410 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" podStartSLOduration=1.700389576 podStartE2EDuration="1.700389576s" podCreationTimestamp="2026-01-26 23:36:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 23:36:08.693065473 +0000 UTC m=+1672.857772958" watchObservedRunningTime="2026-01-26 23:36:08.700389576 +0000 UTC m=+1672.865097051" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.069328 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199364 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199512 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199571 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199642 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199684 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.199728 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") pod \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\" (UID: \"7ec430d5-4541-494e-88bc-d6cb00ceb6fc\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.207264 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs" (OuterVolumeSpecName: "logs") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.224999 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm" (OuterVolumeSpecName: "kube-api-access-d8mtm") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "kube-api-access-d8mtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.244380 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.260143 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.283237 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data" (OuterVolumeSpecName: "config-data") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306300 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306330 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306369 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306381 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8mtm\" (UniqueName: \"kubernetes.io/projected/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-kube-api-access-d8mtm\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.306396 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.322448 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "7ec430d5-4541-494e-88bc-d6cb00ceb6fc" (UID: "7ec430d5-4541-494e-88bc-d6cb00ceb6fc"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.353114 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.408369 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/7ec430d5-4541-494e-88bc-d6cb00ceb6fc-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509425 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509524 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509692 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" 
(UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509734 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.509761 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") pod \"708f8ff2-4449-41ed-9436-28f9aae04852\" (UID: \"708f8ff2-4449-41ed-9436-28f9aae04852\") " Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.510127 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs" (OuterVolumeSpecName: "logs") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.510646 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/708f8ff2-4449-41ed-9436-28f9aae04852-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.524378 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z" (OuterVolumeSpecName: "kube-api-access-q948z") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "kube-api-access-q948z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.543755 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.559872 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.570904 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data" (OuterVolumeSpecName: "config-data") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.600180 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "708f8ff2-4449-41ed-9436-28f9aae04852" (UID: "708f8ff2-4449-41ed-9436-28f9aae04852"). InnerVolumeSpecName "cert-memcached-mtls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612234 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612276 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612286 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q948z\" (UniqueName: \"kubernetes.io/projected/708f8ff2-4449-41ed-9436-28f9aae04852-kube-api-access-q948z\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612296 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.612307 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/708f8ff2-4449-41ed-9436-28f9aae04852-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.693318 4995 generic.go:334] "Generic (PLEG): container finished" podID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerID="1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.693420 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" 
event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerDied","Data":"1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695330 4995 generic.go:334] "Generic (PLEG): container finished" podID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695384 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695409 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-0" event={"ID":"7ec430d5-4541-494e-88bc-d6cb00ceb6fc","Type":"ContainerDied","Data":"3112ece43fb6b0f1c6030da3f87999145998e3975488ad4447ac4aa3ae034350"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695431 4995 scope.go:117] "RemoveContainer" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.695601 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702495 4995 generic.go:334] "Generic (PLEG): container finished" podID="708f8ff2-4449-41ed-9436-28f9aae04852" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" exitCode=0 Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702547 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-api-1" event={"ID":"708f8ff2-4449-41ed-9436-28f9aae04852","Type":"ContainerDied","Data":"3552157133b281fe15163acc8920527c58a71f9a3504fc6611d9bb3087f9461f"} Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.702651 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-api-1" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.731238 4995 scope.go:117] "RemoveContainer" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771011 4995 scope.go:117] "RemoveContainer" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.771523 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": container with ID starting with 6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229 not found: ID does not exist" containerID="6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771579 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229"} err="failed to get container status \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": rpc error: code = NotFound desc = could not find container \"6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229\": container with ID starting with 6ca37f1c2bbbad2ed958bf48473cfe38354609c731672c7c62e589f67f4dd229 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.771609 4995 scope.go:117] "RemoveContainer" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.772358 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": container with ID starting with 
9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3 not found: ID does not exist" containerID="9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.772418 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3"} err="failed to get container status \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": rpc error: code = NotFound desc = could not find container \"9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3\": container with ID starting with 9ec289174f12fdf75212dcdb7c3f96d2e6f9e47e615172daac8c85f7057ae5a3 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.772442 4995 scope.go:117] "RemoveContainer" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.773672 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.790805 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.803854 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-1"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.811041 4995 scope.go:117] "RemoveContainer" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.823979 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.831657 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-api-0"] Jan 26 23:36:09 crc 
kubenswrapper[4995]: I0126 23:36:09.878737 4995 scope.go:117] "RemoveContainer" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.881810 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": container with ID starting with 9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017 not found: ID does not exist" containerID="9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.881855 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017"} err="failed to get container status \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": rpc error: code = NotFound desc = could not find container \"9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017\": container with ID starting with 9e63faddb561685df97b37984c3b84ee0a9b0db349e27c33f17a50b88523c017 not found: ID does not exist" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.881882 4995 scope.go:117] "RemoveContainer" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: E0126 23:36:09.882192 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": container with ID starting with 594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae not found: ID does not exist" containerID="594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae" Jan 26 23:36:09 crc kubenswrapper[4995]: I0126 23:36:09.882212 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae"} err="failed to get container status \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": rpc error: code = NotFound desc = could not find container \"594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae\": container with ID starting with 594b941d82bea32421b428adab890cf1a4d62297b708bb2579d18cdafbaf97ae not found: ID does not exist" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.048197 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.526298 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" path="/var/lib/kubelet/pods/708f8ff2-4449-41ed-9436-28f9aae04852/volumes" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.527086 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" path="/var/lib/kubelet/pods/7ec430d5-4541-494e-88bc-d6cb00ceb6fc/volumes" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.629051 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710501 4995 generic.go:334] "Generic (PLEG): container finished" podID="5820f715-2962-4319-b398-fa2a9975c5ea" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" exitCode=0 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710539 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerDied","Data":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"} Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710567 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-applier-0" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710599 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-applier-0" event={"ID":"5820f715-2962-4319-b398-fa2a9975c5ea","Type":"ContainerDied","Data":"0dbe8986d915bf4ee76e7de6a0a0f090025a5095fbf6910a09a8181b9e9f8012"} Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.710623 4995 scope.go:117] "RemoveContainer" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713392 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" containerID="cri-o://f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713406 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" 
containerID="cri-o://b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713463 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" containerID="cri-o://b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.713456 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="watcher-kuttl-default/ceilometer-0" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" containerID="cri-o://af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" gracePeriod=30 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739628 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739888 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739949 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.739977 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.740048 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") pod \"5820f715-2962-4319-b398-fa2a9975c5ea\" (UID: \"5820f715-2962-4319-b398-fa2a9975c5ea\") " Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.740373 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs" (OuterVolumeSpecName: "logs") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.744193 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst" (OuterVolumeSpecName: "kube-api-access-vjxst") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "kube-api-access-vjxst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.772257 4995 scope.go:117] "RemoveContainer" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: E0126 23:36:10.772700 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": container with ID starting with 40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f not found: ID does not exist" containerID="40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.772724 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f"} err="failed to get container status \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": rpc error: code = NotFound desc = could not find container \"40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f\": container with ID starting with 40634979e668dbef892421f7c4e122c522d2aff10f2da6cc2a08a140521c2e5f not found: ID does not exist" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.843676 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjxst\" (UniqueName: \"kubernetes.io/projected/5820f715-2962-4319-b398-fa2a9975c5ea-kube-api-access-vjxst\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.843707 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5820f715-2962-4319-b398-fa2a9975c5ea-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.854228 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.878358 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data" (OuterVolumeSpecName: "config-data") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897475 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897527 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.897564 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.898150 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"} 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.898194 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" containerID="cri-o://dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" gracePeriod=600 Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.930316 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "5820f715-2962-4319-b398-fa2a9975c5ea" (UID: "5820f715-2962-4319-b398-fa2a9975c5ea"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947120 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947148 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:10 crc kubenswrapper[4995]: I0126 23:36:10.947165 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/5820f715-2962-4319-b398-fa2a9975c5ea-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:11 crc kubenswrapper[4995]: E0126 23:36:11.054349 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.060201 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.062971 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-applier-0"] Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.175477 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.355441 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") pod \"b11fff4a-980d-40c6-a480-3f188cda47bc\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.355593 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") pod \"b11fff4a-980d-40c6-a480-3f188cda47bc\" (UID: \"b11fff4a-980d-40c6-a480-3f188cda47bc\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.356044 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b11fff4a-980d-40c6-a480-3f188cda47bc" (UID: "b11fff4a-980d-40c6-a480-3f188cda47bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.358632 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg" (OuterVolumeSpecName: "kube-api-access-7t7lg") pod "b11fff4a-980d-40c6-a480-3f188cda47bc" (UID: "b11fff4a-980d-40c6-a480-3f188cda47bc"). InnerVolumeSpecName "kube-api-access-7t7lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.457571 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t7lg\" (UniqueName: \"kubernetes.io/projected/b11fff4a-980d-40c6-a480-3f188cda47bc-kube-api-access-7t7lg\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.457608 4995 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b11fff4a-980d-40c6-a480-3f188cda47bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.728122 4995 generic.go:334] "Generic (PLEG): container finished" podID="45aef819-2cda-443f-82ef-6e54a5be4261" containerID="8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057" exitCode=0 Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.728184 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerDied","Data":"8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742437 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" exitCode=0 Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742542 4995 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.742604 4995 scope.go:117] "RemoveContainer" containerID="76f8ec744701d2466129fe4bf8df26122f8725276e4896b88abef624b66b4570" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.743381 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:11 crc kubenswrapper[4995]: E0126 23:36:11.743820 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751202 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" event={"ID":"b11fff4a-980d-40c6-a480-3f188cda47bc","Type":"ContainerDied","Data":"cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751251 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc127f351dfb19f4952ec734c11c273f7c077bf4bf86c7838ab1e91d22fa9c24" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.751333 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watchertest-account-delete-fw4t2" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.767945 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.767951 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" exitCode=0 Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768024 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" exitCode=2 Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768037 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" exitCode=0 Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768054 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.768065 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"} Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.849135 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970207 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970296 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970348 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970383 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970415 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.970452 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") pod \"45aef819-2cda-443f-82ef-6e54a5be4261\" (UID: \"45aef819-2cda-443f-82ef-6e54a5be4261\") " Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.971324 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs" (OuterVolumeSpecName: "logs") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.991283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc" (OuterVolumeSpecName: "kube-api-access-lmvdc") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "kube-api-access-lmvdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:11 crc kubenswrapper[4995]: I0126 23:36:11.998389 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.005414 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.029629 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data" (OuterVolumeSpecName: "config-data") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072500 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmvdc\" (UniqueName: \"kubernetes.io/projected/45aef819-2cda-443f-82ef-6e54a5be4261-kube-api-access-lmvdc\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072530 4995 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45aef819-2cda-443f-82ef-6e54a5be4261-logs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072541 4995 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072550 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.072558 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.075508 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls" (OuterVolumeSpecName: "cert-memcached-mtls") pod "45aef819-2cda-443f-82ef-6e54a5be4261" (UID: "45aef819-2cda-443f-82ef-6e54a5be4261"). InnerVolumeSpecName "cert-memcached-mtls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.173596 4995 reconciler_common.go:293] "Volume detached for volume \"cert-memcached-mtls\" (UniqueName: \"kubernetes.io/secret/45aef819-2cda-443f-82ef-6e54a5be4261-cert-memcached-mtls\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.528983 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" path="/var/lib/kubelet/pods/5820f715-2962-4319-b398-fa2a9975c5ea/volumes" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.618334 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.618666 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.625327 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-db-create-b6hk2"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.664145 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.675141 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.690137 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-test-account-create-update-sq8zx"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.698147 4995 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watchertest-account-delete-fw4t2"] Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781053 4995 generic.go:334] "Generic (PLEG): container finished" podID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" exitCode=0 Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781243 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"} Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782811 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5","Type":"ContainerDied","Data":"2fa9a5a009be0079265a0dcc50a1983900de0b2654a05dca9c424e408f133efc"} Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782346 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782999 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mwmt\" (UniqueName: \"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783167 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") pod 
\"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783297 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783357 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.782889 4995 scope.go:117] "RemoveContainer" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.781349 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783552 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.783733 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") pod \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\" (UID: \"3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5\") " Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.784230 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.784564 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789792 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" event={"ID":"45aef819-2cda-443f-82ef-6e54a5be4261","Type":"ContainerDied","Data":"49b775066686cf1211b0793974ac3eb4ebf22547fdd06b700494d55206364cad"} Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789821 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt" (OuterVolumeSpecName: "kube-api-access-4mwmt") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "kube-api-access-4mwmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.789884 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/watcher-kuttl-decision-engine-0" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.791325 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts" (OuterVolumeSpecName: "scripts") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.812226 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.840831 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885478 4995 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885517 4995 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885529 4995 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885539 4995 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885548 4995 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.885558 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mwmt\" (UniqueName: 
\"kubernetes.io/projected/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-kube-api-access-4mwmt\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.896253 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.904925 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data" (OuterVolumeSpecName: "config-data") pod "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" (UID: "3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.963918 4995 scope.go:117] "RemoveContainer" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.991171 4995 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.991217 4995 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 23:36:12 crc kubenswrapper[4995]: I0126 23:36:12.993521 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:12.995859 4995 scope.go:117] "RemoveContainer" 
containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.000046 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/watcher-kuttl-decision-engine-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.022674 4995 scope.go:117] "RemoveContainer" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048254 4995 scope.go:117] "RemoveContainer" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.048857 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": container with ID starting with b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86 not found: ID does not exist" containerID="b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048905 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86"} err="failed to get container status \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": rpc error: code = NotFound desc = could not find container \"b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86\": container with ID starting with b63276e7b729bc75f8efc12bab0cecd547aaabe957ef6b22134ac3bbb5c58b86 not found: ID does not exist" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.048928 4995 scope.go:117] "RemoveContainer" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.049374 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": container with ID starting with af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66 not found: ID does not exist" containerID="af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049409 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66"} err="failed to get container status \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": rpc error: code = NotFound desc = could not find container \"af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66\": container with ID starting with af2a30496479f17473d812b08c4ad06d82c3e52f3210058a41360c5ccb6a6a66 not found: ID does not exist" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049429 4995 scope.go:117] "RemoveContainer" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.049873 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": container with ID starting with b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b not found: ID does not exist" containerID="b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049913 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b"} err="failed to get container status \"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": rpc error: code = NotFound desc = could not find container 
\"b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b\": container with ID starting with b46b2efbbed6dfc1425026578ce8e02094afebc7c0b177c791cdf23b678b819b not found: ID does not exist" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.049932 4995 scope.go:117] "RemoveContainer" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.050241 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": container with ID starting with f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201 not found: ID does not exist" containerID="f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.050269 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201"} err="failed to get container status \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": rpc error: code = NotFound desc = could not find container \"f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201\": container with ID starting with f1f62d8b506f8a46511f9a34cfe271f16dc455f1c45591c88b5d2bb746de8201 not found: ID does not exist" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.050288 4995 scope.go:117] "RemoveContainer" containerID="8e0a18fefddbd7ab88304acf06d4c9193d40d1dcec642f6c4911e0a3644ff057" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.130852 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.140142 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.155861 
4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156163 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156178 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156197 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156204 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156214 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156220 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156228 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156234 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156241 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 
23:36:13.156246 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156258 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156263 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156272 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156277 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156291 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156297 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156306 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156312 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156319 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: 
I0126 23:36:13.156324 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: E0126 23:36:13.156336 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156341 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156467 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156480 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="5820f715-2962-4319-b398-fa2a9975c5ea" containerName="watcher-applier" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156486 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="708f8ff2-4449-41ed-9436-28f9aae04852" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156508 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-central-agent" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156517 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" containerName="watcher-decision-engine" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156526 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-kuttl-api-log" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156535 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="ceilometer-notification-agent" Jan 
26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156545 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec430d5-4541-494e-88bc-d6cb00ceb6fc" containerName="watcher-api" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156552 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" containerName="mariadb-account-delete" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156559 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="proxy-httpd" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.156569 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" containerName="sg-core" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.157852 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.159886 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-scripts" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.159950 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"cert-ceilometer-internal-svc" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.163721 4995 reflector.go:368] Caches populated for *v1.Secret from object-"watcher-kuttl-default"/"ceilometer-config-data" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.177292 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.302950 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: 
\"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.302995 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303022 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303080 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303131 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303177 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " 
pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303192 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.303230 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405093 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405152 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405208 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405250 4995 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405274 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405298 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405325 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405351 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.405991 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-log-httpd\") 
pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.406144 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/584b0f31-d1a1-4e26-b025-0927cfa15d55-run-httpd\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.410838 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.410997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.412333 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-scripts\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.412753 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-config-data\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.414429 4995 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/584b0f31-d1a1-4e26-b025-0927cfa15d55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.429564 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkfnz\" (UniqueName: \"kubernetes.io/projected/584b0f31-d1a1-4e26-b025-0927cfa15d55-kube-api-access-hkfnz\") pod \"ceilometer-0\" (UID: \"584b0f31-d1a1-4e26-b025-0927cfa15d55\") " pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.471612 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:13 crc kubenswrapper[4995]: I0126 23:36:13.973116 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["watcher-kuttl-default/ceilometer-0"] Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.533550 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e413561-4428-409c-9ca8-2eb61cbe1489" path="/var/lib/kubelet/pods/0e413561-4428-409c-9ca8-2eb61cbe1489/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.535530 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5" path="/var/lib/kubelet/pods/3c81c3bb-f215-40a7-8eb4-d8bcd3e606f5/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.537039 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45aef819-2cda-443f-82ef-6e54a5be4261" path="/var/lib/kubelet/pods/45aef819-2cda-443f-82ef-6e54a5be4261/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.541525 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949c118d-bfd2-4707-9091-abc3434a4fb6" path="/var/lib/kubelet/pods/949c118d-bfd2-4707-9091-abc3434a4fb6/volumes" Jan 26 
23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.542602 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11fff4a-980d-40c6-a480-3f188cda47bc" path="/var/lib/kubelet/pods/b11fff4a-980d-40c6-a480-3f188cda47bc/volumes" Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.834205 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"db19b27dee46892f3861870ad75ed19054b80c81f99e147ca9e8a976d05b3b07"} Jan 26 23:36:14 crc kubenswrapper[4995]: I0126 23:36:14.834589 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"223a1fed81a419ea9a382b7d6404d088da9819a6b7c0d5ee5c03b8bfebd899a0"} Jan 26 23:36:15 crc kubenswrapper[4995]: I0126 23:36:15.851542 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"82ea9f683e39325f563a54b3fae044dcd9b6b60bf5a831fceeecf8cd4bb30bd3"} Jan 26 23:36:16 crc kubenswrapper[4995]: I0126 23:36:16.860668 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"638a48b496b8d08324685d8e97f1eee226de3c651e90264bddf06600e16a3ea9"} Jan 26 23:36:17 crc kubenswrapper[4995]: I0126 23:36:17.875584 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="watcher-kuttl-default/ceilometer-0" event={"ID":"584b0f31-d1a1-4e26-b025-0927cfa15d55","Type":"ContainerStarted","Data":"5825b80d35d17bc2af4d98024e768c30c2494b0f5746cebe5e66dc82d99f5951"} Jan 26 23:36:17 crc kubenswrapper[4995]: I0126 23:36:17.877405 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:17 crc 
kubenswrapper[4995]: I0126 23:36:17.920454 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="watcher-kuttl-default/ceilometer-0" podStartSLOduration=1.882788559 podStartE2EDuration="4.920428826s" podCreationTimestamp="2026-01-26 23:36:13 +0000 UTC" firstStartedPulling="2026-01-26 23:36:13.982376057 +0000 UTC m=+1678.147083522" lastFinishedPulling="2026-01-26 23:36:17.020016324 +0000 UTC m=+1681.184723789" observedRunningTime="2026-01-26 23:36:17.902355912 +0000 UTC m=+1682.067063417" watchObservedRunningTime="2026-01-26 23:36:17.920428826 +0000 UTC m=+1682.085136321" Jan 26 23:36:18 crc kubenswrapper[4995]: I0126 23:36:18.985678 4995 scope.go:117] "RemoveContainer" containerID="628857604cce928f818ebc089bc87e2ce8ba9c786cadc542c50f09fdce7e0220" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.024182 4995 scope.go:117] "RemoveContainer" containerID="fe72b36fbe062455d8a290e6c1bd9e0b00b8cb2f1b8b0be2c5f79be8315462a9" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.077378 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.088857 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/root-account-create-update-tkjsp"] Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.109353 4995 scope.go:117] "RemoveContainer" containerID="8fd006c327ce56252705ed20528a00dcfa084ed04bd5e467803791a1f4ae0733" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.150723 4995 scope.go:117] "RemoveContainer" containerID="5881a006fd0e8b545fdd02ea477aabaa591905ac84b4483905c5ea65a3a15279" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.206783 4995 scope.go:117] "RemoveContainer" containerID="9bcf59f8068a58a5908f7f9f490fcde236bda08e654b64f1d471d1bef1b45cfc" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.245166 4995 scope.go:117] "RemoveContainer" 
containerID="b04176a0e27de47ec9992ca7aa97e0c6c4c8aae35383f6b313a755fda54d8e47" Jan 26 23:36:19 crc kubenswrapper[4995]: I0126 23:36:19.286877 4995 scope.go:117] "RemoveContainer" containerID="54026a5c7938c99685025eb0d6f422b9c6952be4668651d7bb950ada4b54c826" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.036745 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.049854 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.057940 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-ea46-account-create-update-c7cfb"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.065093 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-create-4fsqw"] Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.536695 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c339608-1d36-448f-b3cd-00252341cf0d" path="/var/lib/kubelet/pods/2c339608-1d36-448f-b3cd-00252341cf0d/volumes" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.537951 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513f0b17-1707-4c0c-bc81-d7ead6a553c8" path="/var/lib/kubelet/pods/513f0b17-1707-4c0c-bc81-d7ead6a553c8/volumes" Jan 26 23:36:20 crc kubenswrapper[4995]: I0126 23:36:20.539256 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94023397-a2e2-42cb-8469-003bc383aeaa" path="/var/lib/kubelet/pods/94023397-a2e2-42cb-8469-003bc383aeaa/volumes" Jan 26 23:36:23 crc kubenswrapper[4995]: I0126 23:36:23.516883 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:23 crc kubenswrapper[4995]: E0126 23:36:23.517569 4995 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:38 crc kubenswrapper[4995]: I0126 23:36:38.516944 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:38 crc kubenswrapper[4995]: E0126 23:36:38.517652 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:43 crc kubenswrapper[4995]: I0126 23:36:43.482328 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="watcher-kuttl-default/ceilometer-0" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.661222 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.663174 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666278 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kpk7x"/"default-dockercfg-bchvm" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666483 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpk7x"/"openshift-service-ca.crt" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.666617 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kpk7x"/"kube-root-ca.crt" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.684834 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.693206 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.693305 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794382 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " 
pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794442 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.794882 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.820726 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"must-gather-7f9z4\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:46 crc kubenswrapper[4995]: I0126 23:36:46.982352 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:36:47 crc kubenswrapper[4995]: I0126 23:36:47.453667 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:36:48 crc kubenswrapper[4995]: I0126 23:36:48.197829 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"b8f762046c008e7fa5e5eebfcdf898132741a9d313df87b5ba572c6a8bc38898"} Jan 26 23:36:53 crc kubenswrapper[4995]: I0126 23:36:53.517448 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:36:53 crc kubenswrapper[4995]: E0126 23:36:53.518321 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.257372 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025"} Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.257709 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerStarted","Data":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} Jan 26 23:36:54 crc kubenswrapper[4995]: I0126 23:36:54.275312 4995 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" podStartSLOduration=1.995119361 podStartE2EDuration="8.27529124s" podCreationTimestamp="2026-01-26 23:36:46 +0000 UTC" firstStartedPulling="2026-01-26 23:36:47.462851322 +0000 UTC m=+1711.627558787" lastFinishedPulling="2026-01-26 23:36:53.743023191 +0000 UTC m=+1717.907730666" observedRunningTime="2026-01-26 23:36:54.272445238 +0000 UTC m=+1718.437152713" watchObservedRunningTime="2026-01-26 23:36:54.27529124 +0000 UTC m=+1718.439998705"
Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.052277 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"]
Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.058345 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-db-sync-27jdj"]
Jan 26 23:37:04 crc kubenswrapper[4995]: I0126 23:37:04.526101 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad6fb114-59e8-443d-acd9-7241b8ee783c" path="/var/lib/kubelet/pods/ad6fb114-59e8-443d-acd9-7241b8ee783c/volumes"
Jan 26 23:37:05 crc kubenswrapper[4995]: I0126 23:37:05.517526 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:37:05 crc kubenswrapper[4995]: E0126 23:37:05.518043 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:37:16 crc kubenswrapper[4995]: I0126 23:37:16.524511 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:37:16 crc kubenswrapper[4995]: E0126 23:37:16.525552 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.592876 4995 scope.go:117] "RemoveContainer" containerID="a3d0cf0c24bcaec0a584ae1322d81bc2cc97c571dfb1efe06bea1c6a8030ba2d"
Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.628915 4995 scope.go:117] "RemoveContainer" containerID="7e8cf2c919653011e8c269ce173fbce08dab23f7ee1814809bea2eec540dfb95"
Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.697353 4995 scope.go:117] "RemoveContainer" containerID="87d87779d4c3502bc67575e7abc513b3a091bacd50d75b12711b8a101c37d329"
Jan 26 23:37:19 crc kubenswrapper[4995]: I0126 23:37:19.724285 4995 scope.go:117] "RemoveContainer" containerID="02cef367fb01441bf0b8a9914fe6804f776043582c13fe0f23584fe155ab9938"
Jan 26 23:37:27 crc kubenswrapper[4995]: I0126 23:37:27.519507 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:37:27 crc kubenswrapper[4995]: E0126 23:37:27.520456 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:37:39 crc kubenswrapper[4995]: I0126 23:37:39.517863 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:37:39 crc kubenswrapper[4995]: E0126 23:37:39.519071 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:37:53 crc kubenswrapper[4995]: I0126 23:37:53.516770 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:37:53 crc kubenswrapper[4995]: E0126 23:37:53.517570 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.412834 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.618295 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.621566 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.667995 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.799466 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/util/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.809747 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/pull/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.851396 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_1547be183bac683a7d76767711496d143f6798f207d90c3a09b1211b97vs2xc_1cbffe6c-1d98-4769-8f02-7a966a63ef38/extract/0.log"
Jan 26 23:38:07 crc kubenswrapper[4995]: I0126 23:38:07.979234 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6987f66698-x2fg8_c5dd6b1a-1515-4ad6-b89e-0c7253a71281/manager/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.076164 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-655bf9cfbb-pzzq9_70dc0d96-2ba1-487e-8ffc-a98725e002c4/manager/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.202828 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.415823 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.465506 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.468182 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.518181 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:38:08 crc kubenswrapper[4995]: E0126 23:38:08.518368 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.611985 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/pull/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.650364 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/util/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.701745 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d41be521da91d0b4bc60abc3583b930553524cb12a9752b986ace9a84cszzpn_5c23b438-d384-46e6-8c88-6703c70fccea/extract/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.882760 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-77554cdc5c-kgv2f_90ae2b4f-43e9-4a37-abc5-d90e958e540b/manager/0.log"
Jan 26 23:38:08 crc kubenswrapper[4995]: I0126 23:38:08.927717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-67dd55ff59-gdvdp_4c1f5873-cf2b-4fd3-a83e-97611d3ee0e6/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.084884 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-954b94f75-7q5kj_e29f1042-97e4-430c-a262-53ab3cca40d9/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.130834 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-r7mgm_bd8c5b8d-f13d-48a8-82ff-9928fb5b5b5e/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.350676 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-768b776ffb-6gtf9_555394ee-9ad5-417f-9698-646ba1ddc5f2/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.431494 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d75bc88d5-n9dc8_3a2f8d86-155b-476b-86c4-fda3eb595fc9/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.618603 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-849fcfbb6b-w2gfg_fd2183e6-a9e4-44b8-861f-9a545aac1c12/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.651446 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55f684fd56-gzjxj_235cf5b2-2094-4345-bf37-edbcb2e5e48f/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.805883 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rtnxh_0d39c5fc-e526-46e8-8773-6bf87e938b06/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.869176 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7ffd8d76d4-p47jp_03047106-c820-43c2-bee1-c8b1fb3a0a0c/manager/0.log"
Jan 26 23:38:09 crc kubenswrapper[4995]: I0126 23:38:09.992717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7f54b7d6d4-cf7gh_4e9b965f-6060-43e7-aa1c-b73472075bae/manager/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.044496 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-756f86fc74-7s666_ce22ba19-581c-4f75-9bd6-4de0538779a2/manager/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.189666 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b85484lj5_cfbd9d32-25ae-4369-8e16-ce174c0802dc/manager/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.419291 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-z9fdb_ca183057-4337-4dfb-a5ec-e8945fe74cca/registry-server/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.573652 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-58b6ccbf98-85h8w_03478ac9-bd6b-4726-86b4-cd29045b6dc0/manager/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.816267 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-z899w_1b364747-4f4c-4431-becf-0f2b30bc9d20/manager/0.log"
Jan 26 23:38:10 crc kubenswrapper[4995]: I0126 23:38:10.980341 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-5zhml_931ac40b-6695-41c7-9d8f-c8eefca6e587/manager/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.035217 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-dk2dl_a0641fd3-88a7-4fb2-93f9-ffce84aadef2/operator/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.136857 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-b4kzb_aba99191-8a3a-47dc-8dca-136de682a567/manager/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.339978 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-kjmpf_b60b13f0-97c0-42b9-85fd-2a51218c9ac1/manager/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.390420 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-799bc87c89-bmdgt_fd5d672d-1c27-4782-bbf3-c6d936a8c9bb/manager/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.647968 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-index-k8w76_fea9da97-72c6-4b3a-a479-1566d93b3a22/registry-server/0.log"
Jan 26 23:38:11 crc kubenswrapper[4995]: I0126 23:38:11.735687 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-69796cd4f7-2jmll_001f4541-5731-4423-9cf7-f2c339b975b1/manager/0.log"
Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.822414 4995 scope.go:117] "RemoveContainer" containerID="92cc26c82a9b23a9721c60030809c14c060714d70c702de958b8d81f8d16479b"
Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.845129 4995 scope.go:117] "RemoveContainer" containerID="42bdbf79e7939fb4f6bd922600909eb049e24579c79123df69d4d9b5938f3988"
Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.879154 4995 scope.go:117] "RemoveContainer" containerID="0bebf82f7d2ff6fccacc8ac1b19e5ae9a0ca59b2e9b344a0b5356ce530d49427"
Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.935574 4995 scope.go:117] "RemoveContainer" containerID="9c92253ce611dea0df9e21427e5984e7db9bccf73045bb24769fa3dbad187a39"
Jan 26 23:38:19 crc kubenswrapper[4995]: I0126 23:38:19.954889 4995 scope.go:117] "RemoveContainer" containerID="6252efa89a6bded11f55db4306e63c08033e933d2981726c47ebad7505a562dc"
Jan 26 23:38:20 crc kubenswrapper[4995]: I0126 23:38:20.517012 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:38:20 crc kubenswrapper[4995]: E0126 23:38:20.517350 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.534554 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-s4cw2_8e46628e-0c8d-4128-b57c-ad324ff9f9bc/control-plane-machine-set-operator/0.log"
Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.731923 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klb9g_49ad869c-a391-4d0b-99fa-74e9d7ef4e87/kube-rbac-proxy/0.log"
Jan 26 23:38:33 crc kubenswrapper[4995]: I0126 23:38:33.789695 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-klb9g_49ad869c-a391-4d0b-99fa-74e9d7ef4e87/machine-api-operator/0.log"
Jan 26 23:38:35 crc kubenswrapper[4995]: I0126 23:38:35.516888 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:38:35 crc kubenswrapper[4995]: E0126 23:38:35.517156 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:38:47 crc kubenswrapper[4995]: I0126 23:38:47.517779 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:38:47 crc kubenswrapper[4995]: E0126 23:38:47.518597 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:38:48 crc kubenswrapper[4995]: I0126 23:38:48.883179 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-4g78v_0ea05f4b-1373-4e08-9d78-e214b84cdc79/cert-manager-controller/0.log"
Jan 26 23:38:49 crc kubenswrapper[4995]: I0126 23:38:49.104011 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-hjbt4_10b23efd-9250-469e-8bce-4f31c05d1470/cert-manager-cainjector/0.log"
Jan 26 23:38:49 crc kubenswrapper[4995]: I0126 23:38:49.209167 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-g88s9_5cf25cae-f1af-44e4-a613-be45044cf998/cert-manager-webhook/0.log"
Jan 26 23:39:02 crc kubenswrapper[4995]: I0126 23:39:02.517973 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:39:02 crc kubenswrapper[4995]: E0126 23:39:02.518961 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.078843 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8rf6d_fa9c3198-27d3-4733-8c9c-ccc6f0168f0d/nmstate-console-plugin/0.log"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.261346 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-4nqd8_8bd5c3be-b641-437a-9aad-bcd9a7dd2c56/nmstate-handler/0.log"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.285421 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-75scl_49297381-c6bb-4ede-9f80-38ee237f7a3e/kube-rbac-proxy/0.log"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.380889 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-75scl_49297381-c6bb-4ede-9f80-38ee237f7a3e/nmstate-metrics/0.log"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.465656 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-fnp66_1f224cbd-cdf6-474c-bcc6-a37358dcd4f5/nmstate-operator/0.log"
Jan 26 23:39:05 crc kubenswrapper[4995]: I0126 23:39:05.592023 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jkj8f_4adb027e-2869-4cbc-bdb7-63ae41659c28/nmstate-webhook/0.log"
Jan 26 23:39:15 crc kubenswrapper[4995]: I0126 23:39:15.517943 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:39:15 crc kubenswrapper[4995]: E0126 23:39:15.518811 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.062590 4995 scope.go:117] "RemoveContainer" containerID="4f43eaafefb61a73772d9d42e692be3b8d70484a9a76ac96db06e9b550ed122a"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.106893 4995 scope.go:117] "RemoveContainer" containerID="b4b16b6f1cc961085f1980b33bb732c8fc0fbcf31eda7643a2f07d72636e35f6"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.138326 4995 scope.go:117] "RemoveContainer" containerID="277efe3193b009f2b06839712b4dacd62f8313f279f58b3eccc7197afb22175e"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.166787 4995 scope.go:117] "RemoveContainer" containerID="fe935962b3dd798431c17ed02d94a0c871a317035a5bd78cc9d0e159f906c4a8"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.226888 4995 scope.go:117] "RemoveContainer" containerID="7e3ee0bb83f474f59b73fb0e9420f6ea26d6576fd1f9c21251e039a52f0471bc"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.253829 4995 scope.go:117] "RemoveContainer" containerID="1866d568d45be33fe5efec6245bd56a7ca5c85d09dddb97e98e3df586623483f"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.286117 4995 scope.go:117] "RemoveContainer" containerID="99eb0b14efb02af86f6c14feef7a145f682f560ca0fbfcaebf933cf15112c438"
Jan 26 23:39:20 crc kubenswrapper[4995]: I0126 23:39:20.325285 4995 scope.go:117] "RemoveContainer" containerID="cf3bdba0bcbd9d81e57b55b762961560e4562c68a0aaacec99cefb4e736c2028"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.082883 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zfmp4_a1c71758-f818-4fd6-a985-4aa33488e96c/prometheus-operator/0.log"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.295664 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_684ae2c3-240e-4b73-9aaa-391ad824f47d/prometheus-operator-admission-webhook/0.log"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.359908 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de/prometheus-operator-admission-webhook/0.log"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.514746 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g4lwc_549a554b-0ef6-4d8b-b2cf-4445474572d2/operator/0.log"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.631959 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-k62mg_403406f0-ed75-4c4d-878b-a21885f105d2/observability-ui-dashboards/0.log"
Jan 26 23:39:22 crc kubenswrapper[4995]: I0126 23:39:22.703420 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ngw26_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8/perses-operator/0.log"
Jan 26 23:39:26 crc kubenswrapper[4995]: I0126 23:39:26.522777 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:39:26 crc kubenswrapper[4995]: E0126 23:39:26.523626 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.517452 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8"
Jan 26 23:39:37 crc kubenswrapper[4995]: E0126 23:39:37.517998 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.752800 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bzgkt"]
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.754297 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.773057 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"]
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834575 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834641 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.834673 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936242 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936303 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936331 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.936806 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.937205 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:37 crc kubenswrapper[4995]: I0126 23:39:37.957464 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"community-operators-bzgkt\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.076669 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt"
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.369554 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"]
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667448 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" exitCode=0
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667493 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b"}
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.667519 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"835bfde9b7a58b9c1c6bcda08bf7904147551057930e3a6bacaeb397a7228637"}
Jan 26 23:39:38 crc kubenswrapper[4995]: I0126 23:39:38.669322 4995 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.531057 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hp8cv_fd8ee636-b6e8-4caf-bf47-8356cf3974a5/kube-rbac-proxy/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.662375 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-hp8cv_fd8ee636-b6e8-4caf-bf47-8356cf3974a5/controller/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.676723 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"}
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.766248 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.921344 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.959122 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.959942 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log"
Jan 26 23:39:39 crc kubenswrapper[4995]: I0126 23:39:39.985312 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.299223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.349558 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.500291 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.530273 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.685533 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" exitCode=0
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.685572 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"}
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.788903 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-frr-files/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.848739 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-reloader/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.916897 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/cp-metrics/0.log"
Jan 26 23:39:40 crc kubenswrapper[4995]: I0126 23:39:40.949150 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/controller/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.153060 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/frr-metrics/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.163559 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/kube-rbac-proxy-frr/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.188223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/kube-rbac-proxy/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.394201 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/reloader/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.438261 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bqkf9_d71dd2bc-e8c9-4a37-9096-35a1f19333f8/frr-k8s-webhook-server/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.648397 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-9666f9f76-p9s9z_b70f3de5-9e6d-465f-b6c3-b9eb12eba2d9/manager/0.log"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.708456 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerStarted","Data":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"}
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.731525 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bzgkt" podStartSLOduration=2.276751068 podStartE2EDuration="4.731503774s" podCreationTimestamp="2026-01-26 23:39:37 +0000 UTC" firstStartedPulling="2026-01-26 23:39:38.669131128 +0000 UTC m=+1882.833838583" lastFinishedPulling="2026-01-26 23:39:41.123883824 +0000 UTC m=+1885.288591289" observedRunningTime="2026-01-26 23:39:41.728644752 +0000 UTC m=+1885.893352217" watchObservedRunningTime="2026-01-26 23:39:41.731503774 +0000 UTC m=+1885.896211239"
Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.827927 4995 log.go:25] "Finished parsing log file"
path="/var/log/pods/metallb-system_frr-k8s-lt5dg_11187758-87a2-4879-8421-5d9cdc4fd8bd/frr/0.log" Jan 26 23:39:41 crc kubenswrapper[4995]: I0126 23:39:41.938417 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79fc76bd5c-vctw9_191e8757-940a-4e3e-a884-f5935f9f8201/webhook-server/0.log" Jan 26 23:39:42 crc kubenswrapper[4995]: I0126 23:39:42.037851 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jlkxq_4768de9d-be12-4b0b-9bd1-03f127a1a557/kube-rbac-proxy/0.log" Jan 26 23:39:42 crc kubenswrapper[4995]: I0126 23:39:42.252542 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-jlkxq_4768de9d-be12-4b0b-9bd1-03f127a1a557/speaker/0.log" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.077741 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.078173 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.130110 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:48 crc kubenswrapper[4995]: I0126 23:39:48.826608 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.517251 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:39:51 crc kubenswrapper[4995]: E0126 23:39:51.517768 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.746637 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:51 crc kubenswrapper[4995]: I0126 23:39:51.747034 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bzgkt" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" containerID="cri-o://d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" gracePeriod=2 Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.210332 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393151 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393213 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: \"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.393313 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") pod \"9f7c2892-e695-4b52-87c6-e32d1495bf87\" (UID: 
\"9f7c2892-e695-4b52-87c6-e32d1495bf87\") " Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.394283 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities" (OuterVolumeSpecName: "utilities") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.401092 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv" (OuterVolumeSpecName: "kube-api-access-n8dxv") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "kube-api-access-n8dxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.452973 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f7c2892-e695-4b52-87c6-e32d1495bf87" (UID: "9f7c2892-e695-4b52-87c6-e32d1495bf87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495646 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8dxv\" (UniqueName: \"kubernetes.io/projected/9f7c2892-e695-4b52-87c6-e32d1495bf87-kube-api-access-n8dxv\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495692 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.495701 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f7c2892-e695-4b52-87c6-e32d1495bf87-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820603 4995 generic.go:334] "Generic (PLEG): container finished" podID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" exitCode=0 Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820692 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"} Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820742 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bzgkt" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820801 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bzgkt" event={"ID":"9f7c2892-e695-4b52-87c6-e32d1495bf87","Type":"ContainerDied","Data":"835bfde9b7a58b9c1c6bcda08bf7904147551057930e3a6bacaeb397a7228637"} Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.820838 4995 scope.go:117] "RemoveContainer" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.846528 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.854178 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bzgkt"] Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.857073 4995 scope.go:117] "RemoveContainer" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.880258 4995 scope.go:117] "RemoveContainer" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919011 4995 scope.go:117] "RemoveContainer" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 23:39:52.919482 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": container with ID starting with d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99 not found: ID does not exist" containerID="d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919518 4995 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99"} err="failed to get container status \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": rpc error: code = NotFound desc = could not find container \"d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99\": container with ID starting with d49ca64661d9bbb28c8eea1a21b282b4bea36e46a15218a667c16cd1e52c2d99 not found: ID does not exist" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919544 4995 scope.go:117] "RemoveContainer" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 23:39:52.919879 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": container with ID starting with 7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488 not found: ID does not exist" containerID="7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919899 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488"} err="failed to get container status \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": rpc error: code = NotFound desc = could not find container \"7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488\": container with ID starting with 7757cb18d2fb73c41c37f2aeb67eb40414e2ca35d33931687d8b2b78e65ef488 not found: ID does not exist" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.919915 4995 scope.go:117] "RemoveContainer" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: E0126 
23:39:52.920228 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": container with ID starting with d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b not found: ID does not exist" containerID="d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b" Jan 26 23:39:52 crc kubenswrapper[4995]: I0126 23:39:52.920259 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b"} err="failed to get container status \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": rpc error: code = NotFound desc = could not find container \"d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b\": container with ID starting with d49747af92b67f8710653334a077a7bc097a158cac89a01aff612887e251216b not found: ID does not exist" Jan 26 23:39:54 crc kubenswrapper[4995]: I0126 23:39:54.533813 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" path="/var/lib/kubelet/pods/9f7c2892-e695-4b52-87c6-e32d1495bf87/volumes" Jan 26 23:40:02 crc kubenswrapper[4995]: I0126 23:40:02.518604 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:02 crc kubenswrapper[4995]: E0126 23:40:02.519523 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.261991 
4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/init-config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.472287 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/init-config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.558296 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/alertmanager/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.652295 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_alertmanager-metric-storage-0_5083beb6-ae53-44e5-a82c-872943996b7b/config-reloader/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.788090 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/ceilometer-notification-agent/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.836289 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/ceilometer-central-agent/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.894396 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/sg-core/0.log" Jan 26 23:40:06 crc kubenswrapper[4995]: I0126 23:40:06.895124 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_ceilometer-0_584b0f31-d1a1-4e26-b025-0927cfa15d55/proxy-httpd/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.103212 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_keystone-bootstrap-sf9jb_c5595470-f70f-4bc9-9012-b939a6b2fc0f/keystone-bootstrap/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.103730 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_keystone-984bfcd89-8d4rw_257ee213-d2fa-4d94-9b26-0c62b5411e44/keystone-api/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.281622 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_kube-state-metrics-0_86cef714-2c2e-4825-bab7-c653df90a3c2/kube-state-metrics/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.665513 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/mysql-bootstrap/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.901657 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/galera/0.log" Jan 26 23:40:07 crc kubenswrapper[4995]: I0126 23:40:07.939281 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstack-galera-0_5da7bc3d-c0c7-4935-ba58-c64da8c943b0/mysql-bootstrap/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.089602 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_openstackclient_f27553d1-06f5-4e72-9d14-714d48fbd854/openstackclient/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.260493 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/init-config-reloader/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.452082 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/init-config-reloader/0.log" Jan 26 23:40:08 crc 
kubenswrapper[4995]: I0126 23:40:08.496474 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/config-reloader/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.501269 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/prometheus/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.681713 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_prometheus-metric-storage-0_331b761a-fa99-405f-aedf-a94cb456cdfc/thanos-sidecar/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.686494 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/setup-container/0.log" Jan 26 23:40:08 crc kubenswrapper[4995]: I0126 23:40:08.955570 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.046282 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-notifications-server-0_54ccebac-5075-4c00-a1e9-ebb66b43876e/rabbitmq/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.172064 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.439583 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/setup-container/0.log" Jan 26 23:40:09 crc kubenswrapper[4995]: I0126 23:40:09.557247 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/watcher-kuttl-default_rabbitmq-server-0_4b909799-2071-4d68-ab55-d29f6e224bf2/rabbitmq/0.log" Jan 26 23:40:14 crc kubenswrapper[4995]: I0126 23:40:14.518222 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:14 crc kubenswrapper[4995]: E0126 23:40:14.518917 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:16 crc kubenswrapper[4995]: I0126 23:40:16.151994 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/watcher-kuttl-default_memcached-0_9e495843-c3b4-4d2e-9c40-b11f0d95b5f9/memcached/0.log" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.468802 4995 scope.go:117] "RemoveContainer" containerID="2f4a4987d76b545f02a7d8c08b9fd9eca391865fce1211a494dbae9aeadf38f3" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.535833 4995 scope.go:117] "RemoveContainer" containerID="4af14df6baf5e2d7f5d921b037ff739c3922a94531b7d54b66151b9b3794fdee" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.560049 4995 scope.go:117] "RemoveContainer" containerID="145cc5b8f4d1b5f2f7c477df014248adbc6dd21d5028dfe55f19a4cb11fa10b1" Jan 26 23:40:20 crc kubenswrapper[4995]: I0126 23:40:20.597850 4995 scope.go:117] "RemoveContainer" containerID="bbc420fd12fe1d211845fe7f68211386fea3f13c2e6223073fc5536f18ea16a2" Jan 26 23:40:25 crc kubenswrapper[4995]: I0126 23:40:25.517275 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:25 crc kubenswrapper[4995]: E0126 23:40:25.518211 4995 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:27 crc kubenswrapper[4995]: I0126 23:40:27.792532 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:27 crc kubenswrapper[4995]: I0126 23:40:27.945230 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.036945 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.039905 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.173010 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.197772 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/pull/0.log" Jan 26 23:40:28 crc 
kubenswrapper[4995]: I0126 23:40:28.198303 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahxks8_a2fc70c8-babd-496e-8d1c-acd82bb98901/extract/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.368732 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.505666 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.506917 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.545557 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.727073 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/pull/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.747116 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/util/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.799191 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcnkz7m_a59475a0-c56e-4d7d-a062-2a9b7188a601/extract/0.log" Jan 26 23:40:28 crc kubenswrapper[4995]: I0126 23:40:28.905032 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.100192 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.143349 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.150543 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.324559 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.359979 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/extract/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.362907 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7135bqh6_c72f27ba-28b4-41be-a2e3-894496ce06fb/pull/0.log" Jan 
26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.507467 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.724658 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.729302 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.735776 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.964465 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/util/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.979093 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/extract/0.log" Jan 26 23:40:29 crc kubenswrapper[4995]: I0126 23:40:29.999240 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f086rcqh_388e02fc-e28d-4d4a-94ec-464eb7573a8d/pull/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.184126 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.363637 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.370711 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.404292 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.596868 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-content/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.625025 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.874293 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:30 crc kubenswrapper[4995]: I0126 23:40:30.883282 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-wfnlj_f956bbfb-557b-4b78-b2eb-141bdd1ca81f/registry-server/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.019701 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.112892 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.112893 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.236814 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.305223 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.347419 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vsjb7_d781053b-fcf3-44a7-812a-8af6c2c1ab07/marketplace-operator/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.739994 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6tk5_0d1ac969-80ec-4450-9f6d-0cca599d2185/registry-server/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.829555 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.955264 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.967327 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:31 crc kubenswrapper[4995]: I0126 23:40:31.972775 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.128717 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.133832 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.216323 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-56ct7_7af9b1ce-9df1-4d94-ae24-e8ff6cd5edb8/registry-server/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.220802 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.349530 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.384398 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.400652 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.590051 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-content/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.615562 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/extract-utilities/0.log" Jan 26 23:40:32 crc kubenswrapper[4995]: I0126 23:40:32.833294 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4fw5x_269f6fbd-326f-45d1-a1a6-ea5da5b7daff/registry-server/0.log" Jan 26 23:40:33 crc kubenswrapper[4995]: I0126 23:40:33.038447 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:40:33 crc kubenswrapper[4995]: I0126 23:40:33.043647 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["watcher-kuttl-default/keystone-bootstrap-sf9jb"] Jan 26 23:40:34 crc kubenswrapper[4995]: I0126 23:40:34.529476 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5595470-f70f-4bc9-9012-b939a6b2fc0f" path="/var/lib/kubelet/pods/c5595470-f70f-4bc9-9012-b939a6b2fc0f/volumes" Jan 26 23:40:37 crc kubenswrapper[4995]: I0126 23:40:37.516860 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:37 crc kubenswrapper[4995]: E0126 23:40:37.517401 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.326475 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-zfmp4_a1c71758-f818-4fd6-a985-4aa33488e96c/prometheus-operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.360053 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-xrj5r_684ae2c3-240e-4b73-9aaa-391ad824f47d/prometheus-operator-admission-webhook/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.363535 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6cd9cb56c9-z6s7g_4f936e96-9a6c-4e10-97a1-ccbf7e8c14de/prometheus-operator-admission-webhook/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.568590 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ngw26_f8710ec9-2fc5-400b-83d0-0411f6e7fdc8/perses-operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.610234 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-g4lwc_549a554b-0ef6-4d8b-b2cf-4445474572d2/operator/0.log" Jan 26 23:40:46 crc kubenswrapper[4995]: I0126 23:40:46.615214 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-k62mg_403406f0-ed75-4c4d-878b-a21885f105d2/observability-ui-dashboards/0.log" Jan 26 23:40:51 crc kubenswrapper[4995]: I0126 23:40:51.517456 4995 scope.go:117] 
"RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:40:51 crc kubenswrapper[4995]: E0126 23:40:51.518489 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:41:02 crc kubenswrapper[4995]: I0126 23:41:02.517681 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:41:02 crc kubenswrapper[4995]: E0126 23:41:02.518495 4995 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-sj7pr_openshift-machine-config-operator(09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4)\"" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" Jan 26 23:41:13 crc kubenswrapper[4995]: I0126 23:41:13.517668 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:41:14 crc kubenswrapper[4995]: I0126 23:41:14.575283 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.748173 4995 scope.go:117] "RemoveContainer" containerID="404080cef7718114d3ef40681ba2896d4b0b7f3fac87f1f21efcf7b7105e0285" Jan 26 23:41:20 crc kubenswrapper[4995]: 
I0126 23:41:20.777728 4995 scope.go:117] "RemoveContainer" containerID="cd3358a0ea8ceaa10989cd97ffca9dfefbbb82795be31ea1a44850cfa67b5055" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.830689 4995 scope.go:117] "RemoveContainer" containerID="eaa76726f01faaa0a08761d9ea0a24bad284c08bc58814b2904115408ab201e0" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.873336 4995 scope.go:117] "RemoveContainer" containerID="05941be74554d8c96582833cc04e5255893bcfe29812230a633a9595ed2b3e52" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.926218 4995 scope.go:117] "RemoveContainer" containerID="25c8d3c2991d69a5a3326fb481b95cc7b754074c8cad3e82a6126d4dff723e1b" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.955650 4995 scope.go:117] "RemoveContainer" containerID="30656b19d1917eb3dd412a07deb00ccc5461cf48e1c2a15363c20a1572d6ee9c" Jan 26 23:41:20 crc kubenswrapper[4995]: I0126 23:41:20.988348 4995 scope.go:117] "RemoveContainer" containerID="bf5164b7995961e784d793950a89a89942f6f93bc6fda24c41d104c6d00ebc5b" Jan 26 23:41:21 crc kubenswrapper[4995]: I0126 23:41:21.020851 4995 scope.go:117] "RemoveContainer" containerID="19015ac8e66cfd6b595e7c7c92f0a44c4fa7c488406dc0b9e0bf719041c6fbf3" Jan 26 23:41:21 crc kubenswrapper[4995]: I0126 23:41:21.051278 4995 scope.go:117] "RemoveContainer" containerID="27d7920d9fd33f11ed78c7916026f8f12eca21c60e182186baff705d11e4cf74" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.143354 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:47 crc kubenswrapper[4995]: E0126 23:41:47.144343 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-utilities" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144363 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-utilities" Jan 26 23:41:47 crc kubenswrapper[4995]: 
E0126 23:41:47.144380 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144391 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: E0126 23:41:47.144425 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-content" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144436 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="extract-content" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.144658 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7c2892-e695-4b52-87c6-e32d1495bf87" containerName="registry-server" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.146540 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.167139 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.203768 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.203956 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.204707 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306452 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306544 4995 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.306594 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.307279 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.307284 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.333059 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"redhat-operators-gljh7\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.477506 4995 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:47 crc kubenswrapper[4995]: I0126 23:41:47.971903 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910762 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" exitCode=0 Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910832 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3"} Jan 26 23:41:48 crc kubenswrapper[4995]: I0126 23:41:48.910872 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"f2a57a01c6fd27a753d1502c47deb5d11a61ac11fc497d91ee3965247570c99b"} Jan 26 23:41:49 crc kubenswrapper[4995]: I0126 23:41:49.937001 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} Jan 26 23:41:50 crc kubenswrapper[4995]: I0126 23:41:50.962446 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" exitCode=0 Jan 26 23:41:50 crc kubenswrapper[4995]: I0126 23:41:50.962496 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" 
event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} Jan 26 23:41:51 crc kubenswrapper[4995]: I0126 23:41:51.972918 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerStarted","Data":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} Jan 26 23:41:52 crc kubenswrapper[4995]: I0126 23:41:52.000257 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gljh7" podStartSLOduration=2.555375148 podStartE2EDuration="5.000237833s" podCreationTimestamp="2026-01-26 23:41:47 +0000 UTC" firstStartedPulling="2026-01-26 23:41:48.913316671 +0000 UTC m=+2013.078024136" lastFinishedPulling="2026-01-26 23:41:51.358179366 +0000 UTC m=+2015.522886821" observedRunningTime="2026-01-26 23:41:51.995937206 +0000 UTC m=+2016.160644711" watchObservedRunningTime="2026-01-26 23:41:52.000237833 +0000 UTC m=+2016.164945308" Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.997138 4995 generic.go:334] "Generic (PLEG): container finished" podID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" exitCode=0 Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.997200 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" event={"ID":"6d19bd6c-1672-4d8d-af69-d1cda742bf83","Type":"ContainerDied","Data":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} Jan 26 23:41:54 crc kubenswrapper[4995]: I0126 23:41:54.998004 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:41:55 crc kubenswrapper[4995]: I0126 23:41:55.509802 4995 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/gather/0.log" Jan 26 23:41:57 crc kubenswrapper[4995]: I0126 23:41:57.477806 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:57 crc kubenswrapper[4995]: I0126 23:41:57.477859 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:41:58 crc kubenswrapper[4995]: I0126 23:41:58.537853 4995 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gljh7" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" probeResult="failure" output=< Jan 26 23:41:58 crc kubenswrapper[4995]: timeout: failed to connect service ":50051" within 1s Jan 26 23:41:58 crc kubenswrapper[4995]: > Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.208771 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.209533 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" containerID="cri-o://dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" gracePeriod=2 Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.218913 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kpk7x/must-gather-7f9z4"] Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.633968 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/copy/0.log" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.634663 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.733667 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") pod \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.733738 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") pod \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\" (UID: \"6d19bd6c-1672-4d8d-af69-d1cda742bf83\") " Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.739416 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr" (OuterVolumeSpecName: "kube-api-access-jqtdr") pod "6d19bd6c-1672-4d8d-af69-d1cda742bf83" (UID: "6d19bd6c-1672-4d8d-af69-d1cda742bf83"). InnerVolumeSpecName "kube-api-access-jqtdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.835836 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqtdr\" (UniqueName: \"kubernetes.io/projected/6d19bd6c-1672-4d8d-af69-d1cda742bf83-kube-api-access-jqtdr\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.858554 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "6d19bd6c-1672-4d8d-af69-d1cda742bf83" (UID: "6d19bd6c-1672-4d8d-af69-d1cda742bf83"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:04 crc kubenswrapper[4995]: I0126 23:42:04.937627 4995 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/6d19bd6c-1672-4d8d-af69-d1cda742bf83-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.079782 4995 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kpk7x_must-gather-7f9z4_6d19bd6c-1672-4d8d-af69-d1cda742bf83/copy/0.log" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080139 4995 generic.go:334] "Generic (PLEG): container finished" podID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" exitCode=143 Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080198 4995 scope.go:117] "RemoveContainer" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.080345 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kpk7x/must-gather-7f9z4" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.111741 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184024 4995 scope.go:117] "RemoveContainer" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: E0126 23:42:05.184468 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": container with ID starting with dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025 not found: ID does not exist" containerID="dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184509 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025"} err="failed to get container status \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": rpc error: code = NotFound desc = could not find container \"dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025\": container with ID starting with dc96ac51c09434469d96fd1398965d247b9d4b2104abce01b9ad007e68445025 not found: ID does not exist" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184535 4995 scope.go:117] "RemoveContainer" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: E0126 23:42:05.184803 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": container with ID starting with 
68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb not found: ID does not exist" containerID="68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb" Jan 26 23:42:05 crc kubenswrapper[4995]: I0126 23:42:05.184835 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb"} err="failed to get container status \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": rpc error: code = NotFound desc = could not find container \"68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb\": container with ID starting with 68e13f9947eb9137473ce5e520fe018e29dad4122c180f3096848b6abc978ccb not found: ID does not exist" Jan 26 23:42:06 crc kubenswrapper[4995]: I0126 23:42:06.528053 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" path="/var/lib/kubelet/pods/6d19bd6c-1672-4d8d-af69-d1cda742bf83/volumes" Jan 26 23:42:07 crc kubenswrapper[4995]: I0126 23:42:07.536551 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:07 crc kubenswrapper[4995]: I0126 23:42:07.586786 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.123330 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.124149 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gljh7" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" containerID="cri-o://16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" gracePeriod=2 Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.634609 4995 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785177 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785395 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.785461 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") pod \"c3de82ef-2dae-4204-9b32-878af19e4055\" (UID: \"c3de82ef-2dae-4204-9b32-878af19e4055\") " Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.786229 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities" (OuterVolumeSpecName: "utilities") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.801346 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6" (OuterVolumeSpecName: "kube-api-access-dxrd6") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). 
InnerVolumeSpecName "kube-api-access-dxrd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.887362 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxrd6\" (UniqueName: \"kubernetes.io/projected/c3de82ef-2dae-4204-9b32-878af19e4055-kube-api-access-dxrd6\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.887885 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.917701 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3de82ef-2dae-4204-9b32-878af19e4055" (UID: "c3de82ef-2dae-4204-9b32-878af19e4055"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:42:11 crc kubenswrapper[4995]: I0126 23:42:11.989786 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3de82ef-2dae-4204-9b32-878af19e4055-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152153 4995 generic.go:334] "Generic (PLEG): container finished" podID="c3de82ef-2dae-4204-9b32-878af19e4055" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" exitCode=0 Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152216 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152255 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gljh7" event={"ID":"c3de82ef-2dae-4204-9b32-878af19e4055","Type":"ContainerDied","Data":"f2a57a01c6fd27a753d1502c47deb5d11a61ac11fc497d91ee3965247570c99b"} Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152285 4995 scope.go:117] "RemoveContainer" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.152508 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gljh7" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.198672 4995 scope.go:117] "RemoveContainer" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.250853 4995 scope.go:117] "RemoveContainer" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.258294 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.275462 4995 scope.go:117] "RemoveContainer" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.275952 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": container with ID starting with 16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b not found: ID does not exist" containerID="16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.276220 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b"} err="failed to get container status \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": rpc error: code = NotFound desc = could not find container \"16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b\": container with ID starting with 16389d93c1ac3569b077a102a252681ff4380f59cf4ddadc93cd5ccaf9854f5b not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.276407 4995 scope.go:117] "RemoveContainer" 
containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.277754 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": container with ID starting with 59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb not found: ID does not exist" containerID="59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.278050 4995 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb"} err="failed to get container status \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": rpc error: code = NotFound desc = could not find container \"59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb\": container with ID starting with 59b15dabbab226cb0f82ca7b2b825f24949ac243fde9291df5a7d15cfbeed1cb not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.278361 4995 scope.go:117] "RemoveContainer" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: E0126 23:42:12.278986 4995 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": container with ID starting with 3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3 not found: ID does not exist" containerID="3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.279218 4995 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3"} err="failed to get container status \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": rpc error: code = NotFound desc = could not find container \"3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3\": container with ID starting with 3d50eadf440a994b25275065404d43e4b0db0a7f9802d2d1e0684b5b133a56d3 not found: ID does not exist" Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.282144 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gljh7"] Jan 26 23:42:12 crc kubenswrapper[4995]: I0126 23:42:12.529056 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" path="/var/lib/kubelet/pods/c3de82ef-2dae-4204-9b32-878af19e4055/volumes" Jan 26 23:42:21 crc kubenswrapper[4995]: I0126 23:42:21.296024 4995 scope.go:117] "RemoveContainer" containerID="1582e84b9afefe2ee6063a8f17ab45c4317bc68064db6d3d6e513c3859811183" Jan 26 23:42:21 crc kubenswrapper[4995]: I0126 23:42:21.336854 4995 scope.go:117] "RemoveContainer" containerID="75791934fa81195c3b5b4a00cd7de4aeb20bba8ee707df60b935a30d47992dd2" Jan 26 23:43:40 crc kubenswrapper[4995]: I0126 23:43:40.894181 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:43:40 crc kubenswrapper[4995]: I0126 23:43:40.894862 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 
23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.931940 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932766 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-content" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932782 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-content" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932799 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932807 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932820 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932828 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932837 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932845 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: E0126 23:43:49.932872 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-utilities" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.932880 4995 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="extract-utilities" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933043 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3de82ef-2dae-4204-9b32-878af19e4055" containerName="registry-server" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933070 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="copy" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.933081 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d19bd6c-1672-4d8d-af69-d1cda742bf83" containerName="gather" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.934411 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.953860 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.975748 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.975837 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:49 crc kubenswrapper[4995]: I0126 23:43:49.976000 4995 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077585 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077718 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.077742 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.078159 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.078386 4995 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.107911 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"redhat-marketplace-w8dvg\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.294792 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:43:50 crc kubenswrapper[4995]: I0126 23:43:50.772072 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:43:51 crc kubenswrapper[4995]: I0126 23:43:51.043516 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32"} Jan 26 23:43:51 crc kubenswrapper[4995]: I0126 23:43:51.043568 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8"} Jan 26 23:43:52 crc kubenswrapper[4995]: I0126 23:43:52.057226 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32" exitCode=0 Jan 26 23:43:52 crc 
kubenswrapper[4995]: I0126 23:43:52.057354 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32"} Jan 26 23:43:53 crc kubenswrapper[4995]: I0126 23:43:53.073213 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040" exitCode=0 Jan 26 23:43:53 crc kubenswrapper[4995]: I0126 23:43:53.073327 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040"} Jan 26 23:43:54 crc kubenswrapper[4995]: I0126 23:43:54.087880 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerStarted","Data":"844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2"} Jan 26 23:43:54 crc kubenswrapper[4995]: I0126 23:43:54.108249 4995 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8dvg" podStartSLOduration=3.715322095 podStartE2EDuration="5.108223135s" podCreationTimestamp="2026-01-26 23:43:49 +0000 UTC" firstStartedPulling="2026-01-26 23:43:52.060184092 +0000 UTC m=+2136.224891587" lastFinishedPulling="2026-01-26 23:43:53.453085122 +0000 UTC m=+2137.617792627" observedRunningTime="2026-01-26 23:43:54.105919182 +0000 UTC m=+2138.270626667" watchObservedRunningTime="2026-01-26 23:43:54.108223135 +0000 UTC m=+2138.272930610" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.296167 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.296819 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:00 crc kubenswrapper[4995]: I0126 23:44:00.363851 4995 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:01 crc kubenswrapper[4995]: I0126 23:44:01.228783 4995 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:03 crc kubenswrapper[4995]: I0126 23:44:03.920445 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:03 crc kubenswrapper[4995]: I0126 23:44:03.921295 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w8dvg" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server" containerID="cri-o://844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" gracePeriod=2 Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.187138 4995 generic.go:334] "Generic (PLEG): container finished" podID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerID="844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" exitCode=0 Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.187183 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2"} Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.421058 4995 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438072 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438174 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.438214 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") pod \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\" (UID: \"56abab47-51ea-48a0-a595-6a34b4e0ba6a\") " Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.440475 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities" (OuterVolumeSpecName: "utilities") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.445955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps" (OuterVolumeSpecName: "kube-api-access-l9zps") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "kube-api-access-l9zps". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.464015 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56abab47-51ea-48a0-a595-6a34b4e0ba6a" (UID: "56abab47-51ea-48a0-a595-6a34b4e0ba6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.539668 4995 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.539928 4995 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56abab47-51ea-48a0-a595-6a34b4e0ba6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: I0126 23:44:04.540064 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zps\" (UniqueName: \"kubernetes.io/projected/56abab47-51ea-48a0-a595-6a34b4e0ba6a-kube-api-access-l9zps\") on node \"crc\" DevicePath \"\"" Jan 26 23:44:04 crc kubenswrapper[4995]: E0126 23:44:04.696131 4995 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56abab47_51ea_48a0_a595_6a34b4e0ba6a.slice/crio-85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8\": RecentStats: unable to find data in memory cache]" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.202175 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8dvg" 
event={"ID":"56abab47-51ea-48a0-a595-6a34b4e0ba6a","Type":"ContainerDied","Data":"85fb938305699f9b1455e4e1b9ac4f403aa0c5749107fd53e4320f4161c82bc8"} Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.202260 4995 scope.go:117] "RemoveContainer" containerID="844d332c60cb39e6b59bc8b8828a4406f2165d9145e463c4983cfa44852434a2" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.203412 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8dvg" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.237692 4995 scope.go:117] "RemoveContainer" containerID="a12caeb671986b2e4e72da6722706674669ff02071d97c0a7fea27ba820a1040" Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.242406 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.267127 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8dvg"] Jan 26 23:44:05 crc kubenswrapper[4995]: I0126 23:44:05.284057 4995 scope.go:117] "RemoveContainer" containerID="dc49137d78bc383af2f2f498f749c9f5c6b4f8acb9856168849a631a19561f32" Jan 26 23:44:06 crc kubenswrapper[4995]: I0126 23:44:06.537186 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" path="/var/lib/kubelet/pods/56abab47-51ea-48a0-a595-6a34b4e0ba6a/volumes" Jan 26 23:44:10 crc kubenswrapper[4995]: I0126 23:44:10.893448 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:44:10 crc kubenswrapper[4995]: I0126 23:44:10.893831 4995 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893153 4995 patch_prober.go:28] interesting pod/machine-config-daemon-sj7pr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893817 4995 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.893883 4995 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.894740 4995 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 23:44:40 crc kubenswrapper[4995]: I0126 23:44:40.894822 4995 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" podUID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerName="machine-config-daemon" 
containerID="cri-o://4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061" gracePeriod=600 Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547172 4995 generic.go:334] "Generic (PLEG): container finished" podID="09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4" containerID="4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061" exitCode=0 Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547266 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerDied","Data":"4acaaa2359dd7eaaa1880a32b4db4f9439b498f50ad90d55e3ac94e735bc5061"} Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547861 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-sj7pr" event={"ID":"09bf7e2f-e26c-4d22-8065-c4f64e6e6ec4","Type":"ContainerStarted","Data":"d4b45e4d8deb9701b6136e54a5fbffce80682787350892626a74607b53b30960"} Jan 26 23:44:41 crc kubenswrapper[4995]: I0126 23:44:41.547891 4995 scope.go:117] "RemoveContainer" containerID="dab10de9cb85349ba78086c04b9cdd40a6b3740002a10609ce0efb97633be1e8" Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.161723 4995 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"] Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162563 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-utilities" Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162575 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-utilities" Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162589 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" 
containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162595 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: E0126 23:45:00.162618 4995 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-content"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162625 4995 state_mem.go:107] "Deleted CPUSet assignment" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="extract-content"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.162769 4995 memory_manager.go:354] "RemoveStaleState removing state" podUID="56abab47-51ea-48a0-a595-6a34b4e0ba6a" containerName="registry-server"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.163388 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.166049 4995 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.166543 4995 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.215157 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"]
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.261965 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.262259 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.262294 4995 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.363932 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.363984 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.364010 4995 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.365932 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.373997 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.382799 4995 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"collect-profiles-29491185-whsmh\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.486500 4995 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:00 crc kubenswrapper[4995]: I0126 23:45:00.914341 4995 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"]
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.712938 4995 generic.go:334] "Generic (PLEG): container finished" podID="a0bdea8f-8192-42ae-a341-c4db0996136d" containerID="9b845987076a1ade135f1c53c3e851c31499abee1f7c290929b571d63bed551f" exitCode=0
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.713042 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerDied","Data":"9b845987076a1ade135f1c53c3e851c31499abee1f7c290929b571d63bed551f"}
Jan 26 23:45:01 crc kubenswrapper[4995]: I0126 23:45:01.714683 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerStarted","Data":"3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"}
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.134739 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230260 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230360 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.230536 4995 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") pod \"a0bdea8f-8192-42ae-a341-c4db0996136d\" (UID: \"a0bdea8f-8192-42ae-a341-c4db0996136d\") "
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.231092 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume" (OuterVolumeSpecName: "config-volume") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.231721 4995 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0bdea8f-8192-42ae-a341-c4db0996136d-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.235950 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.235955 4995 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm" (OuterVolumeSpecName: "kube-api-access-rfxcm") pod "a0bdea8f-8192-42ae-a341-c4db0996136d" (UID: "a0bdea8f-8192-42ae-a341-c4db0996136d"). InnerVolumeSpecName "kube-api-access-rfxcm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.333842 4995 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfxcm\" (UniqueName: \"kubernetes.io/projected/a0bdea8f-8192-42ae-a341-c4db0996136d-kube-api-access-rfxcm\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.333882 4995 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a0bdea8f-8192-42ae-a341-c4db0996136d-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734722 4995 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh" event={"ID":"a0bdea8f-8192-42ae-a341-c4db0996136d","Type":"ContainerDied","Data":"3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"}
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734767 4995 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e90300d645223f52dbdc51fd905cab8a8699a961aba3a43cc386be5e80c6d2f"
Jan 26 23:45:03 crc kubenswrapper[4995]: I0126 23:45:03.734793 4995 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29491185-whsmh"
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.228950 4995 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"]
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.246731 4995 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"]
Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.531547 4995 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7de4fe23-2da4-47df-a68b-d6d5148ab964" path="/var/lib/kubelet/pods/7de4fe23-2da4-47df-a68b-d6d5148ab964/volumes"
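For working with an excerpt like this offline, a minimal parsing sketch may help. It assumes only the line layout visible above (a journald prefix `Mon DD HH:MM:SS host unit[pid]: ` followed by a klog header: severity letter, MMDD date, wall-clock time, pid, `file.go:line]`, message); the function and regex names are illustrative, not part of any tool shown here:

```python
import re

# journald prefix + klog header, as seen in the kubenswrapper entries above.
LOG_RE = re.compile(
    r'^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}) '        # journal timestamp
    r'(?P<host>\S+) (?P<unit>\w+)\[(?P<unit_pid>\d+)\]: ' # host and unit[pid]
    r'(?P<sev>[IWEF])(?P<klog_date>\d{4}) '               # severity + MMDD
    r'(?P<klog_time>[\d:.]+)\s+(?P<pid>\d+) '             # wall time + pid
    r'(?P<src>[\w.]+:\d+)\] (?P<msg>.*)$'                 # file:line] message
)

def parse_entry(line: str):
    """Split one journal entry into named fields; return None on no match."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else None

# Example: one entry copied verbatim from the log above.
sample = (
    'Jan 26 23:45:04 crc kubenswrapper[4995]: I0126 23:45:04.246731 4995 '
    'kubelet.go:2431] "SyncLoop REMOVE" source="api" '
    'pods=["openshift-operator-lifecycle-manager/collect-profiles-29491140-x67tv"]'
)
entry = parse_entry(sample)
```

With fields separated this way, filtering to error-severity lines is just `entry["sev"] == "E"`, and grouping by `entry["src"]` isolates which kubelet subsystem emitted each message.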